Entries Tagged "SCADA"

The Vulnerabilities Market and the Future of Security

Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It’s not just software companies, which sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it’s not only criminal organizations, which pay for vulnerabilities they can exploit. Now there are governments, and companies that sell to governments, buying vulnerabilities with the intent of keeping them secret so they can exploit them.

This market is larger than most people realize, and it’s becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from “a U.S. government contractor.” (At first I didn’t believe the story or the price list, but I have been convinced that both are true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.

This is very different from 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits, and from 2010, when a survey implied that there wasn’t much money in selling zero days. The market has matured substantially in the past few years.

This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full disclosure—announcing the vulnerability publicly and damn the consequences—to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that—at least in most cases—is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on—and then secretly sell them to some government agency.

No commercial vendor performs the level of code review that would be necessary to detect this kind of sabotage, let alone prove malicious intent behind it.

Even more importantly, the new market for security vulnerabilities means that a variety of government agencies around the world have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies like the FBI and the German police, who are trying to build targeted Internet surveillance tools, to intelligence agencies like the NSA, who are trying to build mass Internet surveillance tools, to military organizations, who are trying to build cyber-weapons.

All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the “equities issue,” and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone’s communications more secure, or they could exploit the flaw to eavesdrop on others—while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I’ve heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It’s not just that they patch the vulnerabilities that are made public—the fear of bad press makes them implement more secure software development processes. It’s another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I’d be the first to admit that this isn’t perfect—there’s a lot of very poorly written software still out there—but it’s the best incentive we have.
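To make that comparison concrete, here is a minimal sketch of the vendor’s decision in Python, with entirely hypothetical dollar figures (none of these numbers come from the essay):

    # Hypothetical illustration of the vendor's incentive calculation.
    # Every figure is a made-up placeholder, not data from the essay.
    secure_design_cost = 500_000  # extra up-front cost of a secure development process
    bad_press_cost = 400_000      # expected reputational cost when a flaw is disclosed
    patch_cost = 250_000          # cost of writing, testing, and deploying a patch

    # Public disclosure is what forces the vendor to bear the right-hand side,
    # making up-front security the cheaper option.
    if secure_design_cost < bad_press_cost + patch_cost:
        print("Incentive favors designing securely in the first place")
    else:
        print("Incentive favors shipping now and patching later")

Disclosure is what keeps the right-hand side of that comparison large; as secret sales replace disclosure, the expected bad-press and patch costs shrink, and with them the incentive to invest in security up front.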

We’ve always expected the NSA, and those like them, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.

As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is “security for the 1%.” And it makes the rest of us less safe.

This essay previously appeared on Forbes.com.

Edited to add (6/6): Brazilian Portuguese translation here.

EDITED TO ADD (6/12): This presentation makes many of the same points as my essay.

Posted on June 1, 2012 at 6:48 AM

Forever-Day Bugs

That’s a nice turn of phrase:

Forever day is a play on “zero day,” a phrase used to classify vulnerabilities that come under attack before the responsible manufacturer has issued a patch. Also called iDays, or “infinite days” by some researchers, forever days refer to bugs that never get fixed—even when they’re acknowledged by the company that developed the software. In some cases, rather than issuing a patch that plugs the hole, the software maker simply adds advice to user manuals showing how to work around the threat.

The article is about bugs in industrial control systems, many of which don’t have a patching mechanism.

Posted on April 17, 2012 at 1:22 PM

Hack Against SCADA System

A hack against a SCADA system controlling a water pump in Illinois destroyed the pump.

We know absolutely nothing here about the attack or the attacker’s motivations. Was it on purpose? An accident? A fluke?

EDITED TO ADD (12/1): Despite all sorts of allegations that the Russians hacked the water pump, it turns out that it was all a misunderstanding:

Within a week of the report’s release, DHS bluntly contradicted the memo, saying that it could find no evidence that a hack occurred. In truth, the water pump simply burned out, as pumps are wont to do, and a government-funded intelligence center incorrectly linked the failure to an internet connection from a Russian IP address months earlier.

The end of the article makes the most important point, I think:

Joe Weiss says he’s shocked that a report like this was put out without any of the information in it being investigated and corroborated first.

“If you can’t trust the information coming from a fusion center, what is the purpose of having the fusion center sending anything out? That’s common sense,” he said. “When you read what’s in that [report] that is a really, really scary letter. How could DHS not have put something out saying they got this [information but] it’s preliminary?”

Asked if the fusion center is investigating how information that was uncorroborated and was based on false assumptions got into a distributed report, spokeswoman Bond said an investigation of that sort is the responsibility of DHS and the other agencies who compiled the report. The center’s focus, she said, was on how Weiss received a copy of the report that he should never have received.

“We’re very concerned about the leak of controlled information,” Bond said. “Our internal review is looking at how did this information get passed along, confidential or controlled information, get disseminated and put into the hands of users that are not approved to receive that information. That’s number one.”

Notice that the problem isn’t that a non-existent threat was overhyped in a report circulated in secret, but that the report became public. Never mind that if the report hadn’t become public, it would never have been revealed as erroneous. How many other reports like this are being used to justify policies that are as erroneous as the data that supports them?

Posted on November 21, 2011 at 6:57 AM

Remotely Opening Prison Doors

This seems like a bad vulnerability:

Researchers have demonstrated a vulnerability in the computer systems used to control facilities at federal prisons that could allow an outsider to remotely take them over, doing everything from opening and overloading cell door mechanisms to shutting down internal communications systems.

[…]

The researchers began their work after Strauchs was called in by a warden to investigate an incident in which all the cell doors on one prison’s death row spontaneously opened. While the computers used for the supervisory control and data acquisition (SCADA) systems that control prison doors and other systems in theory should not be connected to the Internet, the researchers found that there was an Internet connection associated with every prison system they surveyed. In some cases, prison staff used the same computers to browse the Internet; in others, the companies that had installed the software had put connections in place to do remote maintenance on the systems.

The weirdest part of the article was this last paragraph.

“You could open every cell door, and the system would be telling the control room they are all closed,” Strauchs, a former CIA operations officer, told the Times. He said that he thought the greatest threat was that the system would be used to create the conditions needed for the assassination of a target prisoner.

I guess that’s a threat. But the greatest threat?

EDITED TO ADD (11/14): The original paper.

Posted on November 14, 2011 at 7:14 AM

Attacking PLCs Controlling Prison Doors

Embedded system vulnerabilities in prisons:

Some of the same vulnerabilities that the Stuxnet superworm used to sabotage centrifuges at a nuclear plant in Iran exist in the country’s top high-security prisons, according to security consultant and engineer John Strauchs, who plans to discuss the issue and demonstrate an exploit against the systems at the DefCon hacker conference next week in Las Vegas.

Strauchs, who says he engineered or consulted on electronic security systems in more than 100 prisons, courthouses and police stations throughout the U.S., including eight maximum-security prisons, says the prisons use programmable logic controllers to control locks on cells and other facility doors and gates. PLCs are the same devices that Stuxnet exploited to attack centrifuges in Iran.

This seems like a minor risk today; Stuxnet was a military-grade effort, and beyond the reach of your typical criminal organization. But that can only change, as people study and learn from the reverse-engineered Stuxnet code and as hacking PLCs becomes more common.

As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is only going to become more common.

Posted on August 2, 2011 at 6:23 AM

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems—computer systems that control industrial processes—are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat actually is; it’s scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,'” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

34 SCADA Vulnerabilities Published

It’s hard to tell how serious this is.

Computer security experts who examined the code say the vulnerabilities are not highly dangerous on their own, because they would mostly just allow an attacker to crash a system or siphon sensitive data, and are targeted at operator viewing platforms, not the backend systems that directly control critical processes. But experts caution that the vulnerabilities could still allow an attacker to gain a foothold on a system to find additional security holes that could affect core processes.

Posted on April 1, 2011 at 6:58 AM