Entries Tagged "ICS"


Another Stuxnet Post

Larry Constantine disputes David Sanger’s book about Stuxnet:

So, what did he get wrong? First of all, the Stuxnet worm did not escape into the wild. The analysis of initial infections and propagations by Symantec shows that, in fact, it never was widespread, that it affected computers in closely connected clusters, all of which involved collaborators or companies that had dealings with each other. Secondly, it couldn’t have escaped over the Internet, as Sanger’s account maintains, because it never had that capability built into it: It can only propagate over [a] local-area network, over removable media such as CDs, DVDs, or USB thumb drives. So it was never capable of spreading widely, and in fact the sequence of infections is always connected by a close chain. Another thing that Sanger got wrong … was the notion that the worm escaped when an engineer connected his computer to the PLCs that were controlling the centrifuges and his computer became infected, which then later spread over the Internet. This is also patently impossible because the software that was resident on the PLCs is the payload that directly deals with the centrifuge motors; it does not have the capability of infecting a computer because it doesn’t have any copy of the rest of the Stuxnet system, so that part of the story is simply impossible. In addition, the explanation offered in his book and in his article is that Stuxnet escaped because of an error in the code, with the Americans claiming it was the Israelis’ fault that suddenly allowed it to get onto the Internet because it no longer recognized its environment. Anybody who works in the field knows that this doesn’t quite make sense, but in fact the last version, the last revision to Stuxnet, according to Symantec, had been in March, and it wasn’t discovered until June 17. And in fact the mode of discovery had nothing to do with its being widespread in the wild because in fact it was discovered inside computers in Iran that were being supported by a Belarus antivirus company called VirusBlokAda.
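Constantine’s larger point is really a graph argument: if the worm spreads only over local networks and removable media, then every infection should be reachable from an initial target through a short chain of organizations that deal with each other. Here is a minimal sketch of that kind of cluster analysis, using entirely made-up infection records (the organization names and links below are hypothetical, not Symantec’s data):

```python
# Hypothetical infection records: (org_A, org_B) pairs meaning the worm was
# observed passing between the two organizations. All names and links are
# invented for illustration; this is not real Stuxnet telemetry.
infection_edges = [
    ("initial_target_A", "contractor_1"),
    ("contractor_1", "contractor_2"),
    ("initial_target_B", "supplier_1"),
    ("supplier_1", "contractor_2"),
]

def connected_components(edges):
    """Group organizations into clusters that are reachable from one another."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)

    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return list(clusters.values())

# If the worm really spreads only over LANs and removable media, infected
# organizations should fall into a few tightly linked clusters rather than
# being scattered at random across the Internet.
for cluster in connected_components(infection_edges):
    print(sorted(cluster))
```

The code isn’t the point; the shape of the evidence is. Tightly linked clusters are what you’d expect from LAN-and-USB propagation, not from an Internet-scale worm.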

EDITED TO ADD (9/14): Comment from Larry Constantine.

Posted on September 10, 2012 at 6:51 AM

The Failure of Anti-Virus Companies to Catch Military Malware

Mikko Hypponen of F-Secure attempts to explain why anti-virus companies didn’t catch Stuxnet, DuQu, and Flame:

When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.

What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general.

It wasn’t the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems. When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time. A related malware called DuQu also went undetected by antivirus firms for over a year.

Stuxnet, Duqu and Flame are not normal, everyday malware, of course. All three of them were most likely developed by a Western intelligence agency as part of covert operations that weren’t meant to be discovered.

His conclusion is that the attackers — in this case, military intelligence agencies — are simply better than commercial-grade anti-virus programs.

The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms. But targeted attacks like these go to great lengths to avoid antivirus products on purpose. And the zero-day exploits used in these attacks are unknown to antivirus companies by definition. As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn’t be detected. They have unlimited time to perfect their attacks. It’s not a fair war between the attackers and the defenders when the attackers have access to our weapons.

We really should have been able to do better. But we didn’t. We were out of our league, in our own game.

I don’t buy this. It isn’t just the military that tests its malware against commercial defense products; criminals do it, too. Virus and worm writers do it. Spam writers do it. This is the never-ending arms race between attacker and defender, and it’s been going on for decades. Probably the people who wrote Flame had a larger budget than a large-scale criminal organization, but their evasive techniques weren’t magically better. Note that F-Secure and others had samples of Flame; they just didn’t do anything about them.

I think the difference has more to do with the ways in which these military malware programs spread. That is, slowly and stealthily. It was never a priority to understand — and then write signatures to detect — the Flame samples because they were never considered a problem. Maybe they were classified as a one-off. Or as an anomaly. I don’t know, but it seems clear that conventional non-military malware writers who want to evade detection should adopt the propagation techniques of Flame, Stuxnet, and DuQu.
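To make the “never considered a problem” point concrete: automated back ends rank incoming samples by signals like prevalence and spread rate long before a human looks at them. The following is a purely hypothetical triage heuristic (the weights, threshold, and sample numbers are invented; it is not F-Secure’s actual pipeline), showing how a slow, stealthy sample can sit below the analysis threshold for years:

```python
# Hypothetical sample-triage heuristic. The scoring weights, threshold, and
# sample data are invented for illustration; real AV back ends are far more
# elaborate.
samples = [
    # name, machines_reporting_it, new_reports_per_week
    ("banking_trojan_variant", 120_000, 9_000),
    ("mass_mailing_worm",       45_000, 3_500),
    ("flame_like_sample",           40,      1),  # slow, targeted, stealthy
]

ANALYST_THRESHOLD = 100.0  # arbitrary cut-off for human review

def triage_score(prevalence, spread_rate):
    # Heavily weight how many machines report the sample and how fast that
    # number is growing; rare, slow-spreading samples score near zero.
    return 0.002 * prevalence + 0.01 * spread_rate

for name, prevalence, spread_rate in samples:
    score = triage_score(prevalence, spread_rate)
    verdict = "queued for analysts" if score >= ANALYST_THRESHOLD else "filed away"
    print(f"{name}: score={score:.2f} -> {verdict}")
```

A sample that spreads to a handful of machines and trips no heuristics simply never rises to the top of the queue.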

EDITED TO ADD (6/23): F-Secure responded. Unfortunately, it’s not a very substantive response. It’s a pity; I think there’s an interesting discussion to be had about why the anti-virus companies all missed Flame for so long.

Posted on June 19, 2012 at 7:11 AM

Cyberwar Treaties

We’re in the early years of a cyberwar arms race. It’s expensive, it’s destabilizing, and it threatens the very fabric of the Internet we use every day. Cyberwar treaties, as imperfect as they might be, are the only way to contain the threat.

If you read the press and listen to government leaders, we’re already in the middle of a cyberwar. By any normal definition of the word “war,” this is ridiculous. But the definition of cyberwar has been expanded to include government-sponsored espionage, potential terrorist attacks in cyberspace, large-scale criminal fraud, and even hacker kids attacking government networks and critical infrastructure. This definition is being pushed both by the military and by government contractors, who are gaining power and making money on cyberwar fear.

The danger is that military problems beg for military solutions. We’re starting to see a power grab in cyberspace by the world’s militaries: large-scale monitoring of networks, military control of Internet standards, even military takeover of cyberspace. Last year’s debate over an “Internet kill switch” is an example of this; it’s the sort of measure that might be deployed in wartime but makes no sense in peacetime. At the same time, countries are engaging in offensive actions in cyberspace, with tools like Stuxnet and Flame.

Arms races stem from ignorance and fear: ignorance of the other side’s capabilities, and fear that their capabilities are greater than yours. Once cyberweapons exist, there will be an impetus to use them. Both Stuxnet and Flame damaged networks other than their intended targets. Any military-inserted back doors in Internet systems make us more vulnerable to criminals and hackers. And it is only a matter of time before something big happens, perhaps by the rash actions of a low-level military officer, perhaps by a non-state actor, perhaps by accident. And if the target nation retaliates, we could find ourselves in a real cyberwar.

The cyberwar arms race is destabilizing.

International cooperation and treaties are the only way to reverse this. Banning cyberweapons entirely is a good goal, but almost certainly unachievable. More likely are treaties that stipulate a no-first-use policy, outlaw unaimed or broadly targeted weapons, and mandate weapons that self-destruct at the end of hostilities. Treaties that restrict tactics and limit stockpiles could be a next step. We could prohibit cyberattacks against civilian infrastructure; international banking, for example, could be declared off-limits.

Yes, enforcement will be difficult. Remember how easy it was to hide a chemical weapons facility? Hiding a cyberweapons facility will be even easier. But we’ve learned a lot from our Cold War experience in negotiating nuclear, chemical, and biological treaties. The very act of negotiating limits the arms race and paves the way to peace. And even if they’re breached, the world is safer because the treaties exist.

There’s a common belief within the U.S. military that cyberweapons treaties are not in our best interest: that we currently have a military advantage in cyberspace that we should not squander. That’s not true. We might have an offensive advantage (although that’s debatable), but we certainly don’t have a defensive advantage. More importantly, as a heavily networked country, we are inherently vulnerable in cyberspace.

Cyberspace threats are real. Military threats might get the publicity, but the criminal threats are both more dangerous and more damaging. Militarizing cyberspace will do more harm than good. The value of a free and open Internet is enormous.

Stop cyberwar fear mongering. Ratchet down cyberspace saber rattling. Start negotiations on limiting the militarization of cyberspace and increasing international police cooperation. This won’t magically make us safe, but it will make us safer.

This essay first appeared on the U.S. News and World Report website, as part of a series of essays on the question: “Should there be an international treaty on cyberwarfare?”

Posted on June 14, 2012 at 6:40 AM

Flame

Flame seems to be another military-grade cyber-weapon, this one optimized for espionage. The worm is at least two years old, and is mainly confined to computers in the Middle East. (It does not replicate and spread automatically, which is certainly so that its controllers can target it better and evade detection longer.) And its espionage capabilities are pretty impressive. We’ll know more in the coming days and weeks as different groups start analyzing it and publishing their results.

EDITED TO ADD (6/11): Flame’s use of spoofed Microsoft security certificates. Flame’s use of an as-yet-unknown MD5 chosen-prefix collision attack.

Microsoft has a detailed blog post on the attack. The attackers managed to get a valid code-signing certificate using a signer that only accepts restricted client certificates.
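One practical lesson from the collision attack is that MD5-signed certificates were still being trusted in 2012. As a defensive illustration, here is a minimal sketch of flagging weak signature hashes in a certificate file, using the third-party Python cryptography package (the example file name is a placeholder):

```python
# Flag certificates whose signatures rely on weak hash algorithms such as
# MD5 or SHA-1. Requires the third-party "cryptography" package; the
# example file name is a placeholder.
import sys

from cryptography import x509
from cryptography.hazmat.primitives import hashes

WEAK_HASHES = (hashes.MD5, hashes.SHA1)

def check_certificate(pem_path):
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    algo = cert.signature_hash_algorithm  # None for some signature schemes
    if algo is None:
        print(f"{pem_path}: no hash-based signature algorithm reported")
    elif isinstance(algo, WEAK_HASHES):
        print(f"{pem_path}: WEAK signature hash ({algo.name}), "
              f"issuer: {cert.issuer.rfc4514_string()}")
    else:
        print(f"{pem_path}: signature hash {algo.name} looks acceptable")

if __name__ == "__main__":
    for path in sys.argv[1:] or ["example-cert.pem"]:
        check_certificate(path)
```

Run over an exported certificate chain, a check like this shows at a glance whether anything still leans on MD5 or SHA-1.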

EDITED TO ADD (6/12): MITM attack in the worm. There’s a connection to Stuxnet. A self-destruct command was apparently sent.

Posted on June 4, 2012 at 6:21 AM

The Vulnerabilities Market and the Future of Security

Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It’s not just software companies, who sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it’s not only criminal organizations, who pay for vulnerabilities they can exploit. Now there are governments, and companies who sell to governments, who buy vulnerabilities with the intent of keeping them secret so they can exploit them.

This market is larger than most people realize, and it’s becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from “a U.S. government contractor.” (At first I didn’t believe the story or the price list, but I have been convinced that both are true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.

This is very different from 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits, and from 2010, when a survey implied that there wasn’t much money in selling zero days. The market has matured substantially in the past few years.

This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full-disclosure — announcing the vulnerability publicly and damn the consequences — to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that — at least in most cases — is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on — and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies (like the FBI and the German police) that are trying to build targeted Internet surveillance tools, to intelligence agencies (like the NSA) that are trying to build mass Internet surveillance tools, to military organizations that are trying to build cyber-weapons.

All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the “equities issue,” and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone’s communications more secure, or they could exploit the flaw to eavesdrop on others — while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I’ve heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It’s not just that they patch the vulnerabilities that are made public — the fear of bad press makes them implement more secure software development processes. It’s another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I’d be the first to admit that this isn’t perfect — there’s a lot of very poorly written software still out there — but it’s the best incentive we have.
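That trade-off can be made concrete with a back-of-the-envelope calculation. The figures below are entirely invented; the point is only the structure of the comparison, and what happens to it when public disclosure becomes less likely:

```python
# Back-of-the-envelope comparison of secure-development cost versus the
# expected cost of shipping an exploitable flaw. All figures are invented.
secure_dev_cost     = 200_000  # extra review, threat modeling, fuzzing, etc.
patch_cost          = 150_000  # engineering plus deployment of an emergency fix
bad_press_cost      = 400_000  # estimated reputational and support impact
p_public_disclosure = 0.6      # chance the flaw is found and publicly disclosed

expected_insecure_cost = p_public_disclosure * (patch_cost + bad_press_cost)

print(f"up-front secure development:        ${secure_dev_cost:,}")
print(f"expected cost of shipping the flaw: ${expected_insecure_cost:,.0f}")
# With these (hypothetical) numbers the up-front investment is cheaper. But
# if exploits are sold privately and kept secret, p_public_disclosure drops,
# and with it the vendor's incentive to invest up front.
```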

We’ve always expected the NSA, and those like them, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.

As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is “security for the 1%.” And it makes the rest of us less safe.

This essay previously appeared on Forbes.com.

Edited to add (6/6): Brazilian Portuguese translation here.

EDITED TO ADD (6/12): This presentation makes points similar to those in my essay.

Posted on June 1, 2012 at 6:48 AM

Forever-Day Bugs

That’s a nice turn of phrase:

Forever day is a play on “zero day,” a phrase used to classify vulnerabilities that come under attack before the responsible manufacturer has issued a patch. Also called iDays, or “infinite days” by some researchers, forever days refer to bugs that never get fixed, even when they’re acknowledged by the company that developed the software. In some cases, rather than issuing a patch that plugs the hole, the software maker simply adds advice to user manuals showing how to work around the threat.

The article is about bugs in industrial control systems, many of which don’t have a patching mechanism.
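When a patch is never coming, the usual compensating control is network isolation: keep the device off routable networks and verify that nothing industrial is reachable from where it shouldn’t be. Here is a minimal sketch of such a reachability check in Python, with placeholder hosts and a short list of common ICS ports; it is only an illustration, and should only be run against systems you are authorized to test:

```python
# Check whether common industrial-protocol ports are reachable from this
# machine. The host list and port map are placeholders; only run this
# against systems you are authorized to test.
import socket

ICS_PORTS = {
    502:   "Modbus/TCP",
    20000: "DNP3",
    44818: "EtherNet/IP",
    102:   "Siemens S7 (ISO-TSAP)",
}

HOSTS = ["192.0.2.10", "192.0.2.11"]  # documentation-range placeholder addresses

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    for port, proto in ICS_PORTS.items():
        if port_open(host, port):
            print(f"{host}:{port} ({proto}) is reachable -- "
                  f"should it be exposed on this network segment?")
```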

Posted on April 17, 2012 at 1:22 PM

Hack Against SCADA System

A hack against a SCADA system controlling a water pump in Illinois destroyed the pump.

We know absolutely nothing here about the attack or the attacker’s motivations. Was it on purpose? An accident? A fluke?

EDITED TO ADD (12/1): Despite all sorts of allegations that the Russians hacked the water pump, it turns out that it was all a misunderstanding:

Within a week of the report’s release, DHS bluntly contradicted the memo, saying that it could find no evidence that a hack occurred. In truth, the water pump simply burned out, as pumps are wont to do, and a government-funded intelligence center incorrectly linked the failure to an internet connection from a Russian IP address months earlier.

The end of the article makes the most important point, I think:

Joe Weiss says he’s shocked that a report like this was put out without any of the information in it being investigated and corroborated first.

“If you can’t trust the information coming from a fusion center, what is the purpose of having the fusion center sending anything out? That’s common sense,” he said. “When you read what’s in that [report] that is a really, really scary letter. How could DHS not have put something out saying they got this [information but] it’s preliminary?”

Asked if the fusion center is investigating how information that was uncorroborated and was based on false assumptions got into a distributed report, spokeswoman Bond said an investigation of that sort is the responsibility of DHS and the other agencies who compiled the report. The center’s focus, she said, was on how Weiss received a copy of the report that he should never have received.

“We’re very concerned about the leak of controlled information,” Bond said. “Our internal review is looking at how did this information get passed along, confidential or controlled information, get disseminated and put into the hands of users that are not approved to receive that information. That’s number one.”

Notice that the problem isn’t that a non-existent threat was overhyped in a report circulated in secret, but that the report became public. Never mind that if the report hadn’t become public, it would never have been revealed as erroneous. How many other reports like this are being used to justify policies that are as erroneous as the data that supports them?

Posted on November 21, 2011 at 6:57 AM

Remotely Opening Prison Doors

This seems like a bad vulnerability:

Researchers have demonstrated a vulnerability in the computer systems used to control facilities at federal prisons that could allow an outsider to remotely take them over, doing everything from opening and overloading cell door mechanisms to shutting down internal communications systems.

[…]

The researchers began their work after Strauchs was called in by a warden to investigate an incident in which all the cell doors on one prison’s death row spontaneously opened. While the computers used for the supervisory control and data acquisition (SCADA) systems that control prison doors and other systems in theory should not be connected to the Internet, the researchers found that there was an Internet connection associated with every prison system they surveyed. In some cases, prison staff used the same computers to browse the Internet; in others, the companies that had installed the software had put connections in place to do remote maintenance on the systems.

The weirdest part of the article was this last paragraph.

“You could open every cell door, and the system would be telling the control room they are all closed,” Strauchs, a former CIA operations officer, told the Times. He said that he thought the greatest threat was that the system would be used to create the conditions needed for the assassination of a target prisoner.

I guess that’s a threat. But the greatest threat?

EDITED TO ADD (11/14): The original paper.

Posted on November 14, 2011 at 7:14 AM
