Entries Tagged "cyberwar"


Gary McGraw on National Cybersecurity

Good essay, making the point that cyberattack and counterattack aren’t very useful—actual cyberdefense is what’s wanted.

Creating a cyber-rock is cheap. Buying a cyber-rock is even cheaper since zero-day attacks exist on the open market for sale to the highest bidder. In fact, if the bad guy is willing to invest time rather than dollars and become an insider, cyber-rocks may in fact be free of charge, but that is a topic for another time.

Given these price tags, it is safe to assume that some nations have already developed a collection of cyber-rocks, and that many other nations will develop a handful of specialized cyber-rocks (e.g., as an extension of many-year-old regional conflicts). If we follow the advice of Hayden and Chabinsky, we may even distribute cyber-rocks to private corporations.

Obviously, active defense is folly if all it means is unleashing the cyber-rocks from inside our glass houses, since everyone can or will have cyber-rocks. Even worse, unlike very high explosives, or nuclear materials, or other easily trackable munitions (part of whose deterrence value lies in others knowing about them), no one will ever know just how many or what kind of cyber-rocks a particular group actually has.

Now that we have established that cyber-offense is relatively easy and can be accomplished on the cheap, we can see why reliance on offense alone is inadvisable. What are we going to do to stop cyberwar from starting in the first place? The good news is that war has both defensive and offensive aspects, and understanding this fundamental dynamic is central to understanding cyberwar and deterrence.

The kind of defense I advocate (called “passive defense” or “protection” above) involves security engineering—building security in as we create our systems, knowing full well that they will be attacked in the future. One of the problems to overcome is that exploits are sexy and engineering is, well, not so sexy.

Posted on November 8, 2012 at 1:24 PM

Stoking Cyber Fears

A lot of the debate around President Obama’s cybersecurity initiative centers on how much of a burden it would place on industry, and how that burden should be financed. As important as that debate is, it obscures some of the larger issues surrounding cyberwar, cyberterrorism, and cybersecurity in general.

It’s difficult to have any serious policy discussion amid all the fear mongering. Secretary Panetta’s recent comments are just the latest; search the Internet for “cyber 9/11,” “cyber Pearl Harbor,” “cyber Katrina,” or—my favorite—“cyber Armageddon.”

There’s an enormous amount of money and power that results from pushing cyberwar and cyberterrorism: power within the military, the Department of Homeland Security, and the Justice Department; and lucrative government contracts supporting those organizations. As long as cyber remains a prefix that scares, it’ll continue to be used as a bugaboo.

But while scare stories are more movie-plot than actual threat, there are real risks. The government is continually poked and probed in cyberspace, by attackers ranging from kids playing politics to sophisticated national intelligence-gathering operations. Hackers can do damage, although nothing like what the cyberterrorism rhetoric would lead you to believe. Cybercrime continues to rise, and still poses real risks to those of us who work, shop, and play on the Internet. And cyberdefense needs to be part of our military strategy.

Industry has definitely not done enough to protect our nation’s critical infrastructure, and the federal government may need to get more involved. This should come as no surprise; the economic externalities in cybersecurity are so great that even the freest free market would fail.

For example, the owner of a chemical plant will protect that plant from cyberattack up to the value of that plant to the owner; the residual risk to the community around the plant will remain. Politics will color what government involvement looks like: market incentives, regulation, or outright government takeover of some aspects of cybersecurity.
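
To see the externality concretely, here is a stylized back-of-the-envelope calculation in Python. Every number is invented for illustration; the point is only that the owner’s rational spending cap ignores the community’s exposure.

```python
# Stylized cybersecurity-externality arithmetic. Every figure below is
# invented for illustration; this models no real plant.
plant_value_to_owner = 50_000_000   # owner's private loss if the plant is hit
harm_to_community    = 500_000_000  # losses borne by neighbors, not the owner
attack_probability   = 0.01         # assumed annual chance of a successful attack

# A rational owner spends at most his own expected loss on defense...
owner_cap = attack_probability * plant_value_to_owner

# ...but the socially optimal cap also counts the community's exposure.
social_cap = attack_probability * (plant_value_to_owner + harm_to_community)

print(f"Owner will spend up to     ${owner_cap:,.0f} per year")
print(f"Society should spend up to ${social_cap:,.0f} per year")
# The gap between the two numbers is the residual risk the market leaves.
```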

None of this requires heavy-handed regulation. Over the past few years we’ve heard calls for the military to better control Internet protocols; for the United States to be able to “kill” all or part of the Internet, or to cut itself off from the greater Internet; for increased government surveillance; and for limits on anonymity. All of those would be dangerous, and would make us less secure. Meanwhile, the world’s first military cyberweapon, Stuxnet, was used by the United States and Israel against Iran.

In all of this government posturing about cybersecurity, the biggest risk is a cyberwar arms race; and that’s where remarks like Panetta’s lead us. Increased government spending on cyberweapons and cyberdefense, and the increased militarization of cyberspace that comes with it, are both expensive and destabilizing. Fears lead to weapons buildups, and weapons beg to be used.

I would like to see less fear mongering, and more reasoned discussion about the actual threats and reasonable countermeasures. Pushing the fear button benefits no one.

This essay originally appeared in the New York Times “Room for Debate” blog. Here are the other essays on the topic.

Posted on October 19, 2012 at 7:45 AM

Another Stuxnet Post

Larry Constantine disputes David Sanger’s book about Stuxnet:

So, what did he get wrong? First of all, the Stuxnet worm did not escape into the wild. Symantec’s analysis of initial infections and propagations shows, in fact, that it never was widespread, that it affected computers in closely connected clusters, all of which involved collaborators or companies that had dealings with each other.

Secondly, it couldn’t have escaped over the Internet, as Sanger’s account maintains, because it never had that capability built into it: it can only propagate over [a] local-area network or over removable media such as CDs, DVDs, or USB thumb drives. So it was never capable of spreading widely, and in fact the sequence of infections is always connected in a close chain.

Another thing that Sanger got wrong … was the notion that the worm escaped when an engineer connected his computer to the PLCs that were controlling the centrifuges, his computer became infected, and the infection later spread over the Internet. This is also patently impossible, because the software resident on the PLCs is the payload that directly deals with the centrifuge motors; it does not have the capability of infecting a computer, because it doesn’t carry a copy of the rest of the Stuxnet system. So that part of the story is simply impossible.

In addition, the explanation offered in his book and in his article is that Stuxnet escaped because of an error in the code (with the Americans claiming it was the Israelis’ fault) that suddenly allowed it to get onto the Internet because it no longer recognized its environment. Anybody who works in the field knows that this doesn’t quite make sense. In fact, the last revision to Stuxnet, according to Symantec, had been in March, and it wasn’t discovered until June 17. And the mode of discovery had nothing to do with its being widespread in the wild: it was discovered inside computers in Iran that were being supported by a Belarus antivirus company called VirusBlokAda.
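
Constantine’s “close chain” argument is, at bottom, a claim about graph structure: every reported infection should be reachable from the initial target through known machine-to-machine links. Here is a minimal sketch of that connectivity check, using a plain breadth-first search over invented infection reports (real analyses, like Symantec’s, reconstruct such links from sample metadata; every name below is hypothetical):

```python
from collections import deque

# Hypothetical infection reports: (infected_machine, infected_by) pairs.
# All machine names are invented for illustration.
reports = [
    ("contractor-pc-1", "initial-target"),
    ("contractor-pc-2", "contractor-pc-1"),
    ("partner-firm-pc", "contractor-pc-2"),
]

def forms_close_chain(reports, origin="initial-target"):
    """True if every reported infection is reachable from the origin
    through the reported machine-to-machine links."""
    children = {}
    for child, parent in reports:
        children.setdefault(parent, []).append(child)
    seen, queue = {origin}, deque([origin])
    while queue:  # plain breadth-first search from the origin
        for nxt in children.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return all(child in seen for child, _ in reports)

print(forms_close_chain(reports))  # True: one connected cluster, not "the wild"
```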

EDITED TO ADD (9/14): Comment from Larry Constantine.

Posted on September 10, 2012 at 6:51 AM

The Failure of Anti-Virus Companies to Catch Military Malware

Mikko Hypponen of F-Secure attempts to explain why anti-virus companies didn’t catch Stuxnet, DuQu, and Flame:

When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.

What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general.

It wasn’t the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems. When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time. A related malware called DuQu also went undetected by antivirus firms for over a year.

Stuxnet, Duqu and Flame are not normal, everyday malware, of course. All three of them were most likely developed by a Western intelligence agency as part of covert operations that weren’t meant to be discovered.

His conclusion is that the attackers—in this case, military intelligence agencies—are simply better than commercial-grade anti-virus programs.

The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms. But targeted attacks like these go to great lengths to avoid antivirus products on purpose. And the zero-day exploits used in these attacks are unknown to antivirus companies by definition. As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn’t be detected. They have unlimited time to perfect their attacks. It’s not a fair war between the attackers and the defenders when the attackers have access to our weapons.

We really should have been able to do better. But we didn’t. We were out of our league, in our own game.

I don’t buy this. It isn’t just the military that tests its malware against commercial defense products; criminals do it, too. Virus and worm writers do it. Spam writers do it. This is the never-ending arms race between attacker and defender, and it’s been going on for decades. Probably the people who wrote Flame had a larger budget than a large-scale criminal organization, but their evasive techniques weren’t magically better. Note that F-Secure and others had samples of Flame; they just didn’t do anything about them.

I think the difference has more to do with the way these military malware programs spread: slowly and stealthily. It was never a priority to understand—and then write signatures to detect—the Flame samples because they were never considered a problem. Maybe they were classified as a one-off, or as an anomaly. I don’t know, but it seems clear that conventional non-military malware writers who want to evade detection should adopt the propagation techniques of Flame, Stuxnet, and DuQu.
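
It’s worth spelling out the mechanics here. A signature scanner can only flag what it has a signature for; a sample can sit in an archive for years, invisible, until someone writes the rule, at which point a retroactive scan over that same archive finds it immediately. Here is a minimal sketch of that dynamic, reduced to hash matching (real engines also use byte patterns, heuristics, and behavioral rules; all filenames and contents below are invented):

```python
import hashlib

# Toy signature database: SHA-256 digests of known-bad files. No rule
# in the database means no detection, no matter how long the sample
# has been sitting in the archive.
signatures = set()

def scan(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in signatures

archive = {
    "report-2010-0815.bin": b"bytes of an unflagged 2010 sample",
    "report-2011-0302.bin": b"bytes of an unflagged 2011 sample",
}

# For two years no signature exists, so every scan comes back clean.
assert not any(scan(data) for data in archive.values())

# The day someone writes the signature, a retroactive hunt over the very
# same archive lights up immediately.
signatures.add(hashlib.sha256(archive["report-2010-0815.bin"]).hexdigest())
print([name for name, data in archive.items() if scan(data)])
# -> ['report-2010-0815.bin']
```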

EDITED TO ADD (6/23): F-Secure responded. Unfortunately, it’s not a very substantive response. It’s a pity; I think there’s an interesting discussion to be had about why the anti-virus companies all missed Flame for so long.

Posted on June 19, 2012 at 7:11 AM

Cyberwar Treaties

We’re in the early years of a cyberwar arms race. It’s expensive, it’s destabilizing, and it threatens the very fabric of the Internet we use every day. Cyberwar treaties, as imperfect as they might be, are the only way to contain the threat.

If you read the press and listen to government leaders, we’re already in the middle of a cyberwar. By any normal definition of the word “war,” this is ridiculous. But the definition of cyberwar has been expanded to include government-sponsored espionage, potential terrorist attacks in cyberspace, large-scale criminal fraud, and even hacker kids attacking government networks and critical infrastructure. This definition is being pushed both by the military and by government contractors, who are gaining power and making money on cyberwar fear.

The danger is that military problems beg for military solutions. We’re starting to see a power grab in cyberspace by the world’s militaries: large-scale monitoring of networks, military control of Internet standards, even military takeover of cyberspace. Last year’s debate over an “Internet kill switch” is an example of this; it’s the sort of measure that might be deployed in wartime but makes no sense in peacetime. At the same time, countries are engaging in offensive actions in cyberspace, with tools like Stuxnet and Flame.

Arms races stem from ignorance and fear: ignorance of the other side’s capabilities, and fear that their capabilities are greater than yours. Once cyberweapons exist, there will be an impetus to use them. Both Stuxnet and Flame damaged networks other than their intended targets. Any military-inserted back doors in Internet systems make us more vulnerable to criminals and hackers. And it is only a matter of time before something big happens, perhaps by the rash actions of a low-level military officer, perhaps by a non-state actor, perhaps by accident. And if the target nation retaliates, we could find ourselves in a real cyberwar.

The cyberwar arms race is destabilizing.

International cooperation and treaties are the only way to reverse this. Banning cyberweapons entirely is a good goal, but almost certainly unachievable. More likely are treaties that stipulate a no-first-use policy, outlaw unaimed or broadly targeted weapons, and mandate weapons that self-destruct at the end of hostilities. Treaties that restrict tactics and limit stockpiles could be a next step. We could prohibit cyberattacks against civilian infrastructure; international banking, for example, could be declared off-limits.
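
The self-destruct provision, at least, is proven engineering: Stuxnet reportedly carried a hard-coded stop date of June 24, 2012, after which it would no longer spread. A treaty-mandated version would amount to a check like this sketch (the function name is invented for illustration):

```python
from datetime import date

# Stuxnet reportedly carried a hard-coded cutoff of June 24, 2012, after
# which it stopped infecting new machines. A treaty's "self-destruct at
# the end of hostilities" clause would mandate exactly this kind of check.
KILL_DATE = date(2012, 6, 24)

def should_propagate(today: date) -> bool:
    """Refuse to spread once the hard-coded end date has passed."""
    return today < KILL_DATE

print(should_propagate(date(2010, 7, 1)))  # True: within the active window
print(should_propagate(date(2013, 1, 1)))  # False: the weapon goes inert
```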

Yes, enforcement will be difficult. Remember how easy it was to hide a chemical weapons facility? Hiding a cyberweapons facility will be even easier. But we’ve learned a lot from our Cold War experience in negotiating nuclear, chemical, and biological treaties. The very act of negotiating limits the arms race and paves the way to peace. And even if they’re breached, the world is safer because the treaties exist.

There’s a common belief within the U.S. military that cyberweapons treaties are not in our best interest: that we currently have a military advantage in cyberspace that we should not squander. That’s not true. We might have an offensive advantage (although that’s debatable), but we certainly don’t have a defensive advantage. More importantly, as a heavily networked country, we are inherently vulnerable in cyberspace.

Cyberspace threats are real. Military threats might get the publicity, but the criminal threats are both more dangerous and more damaging. Militarizing cyberspace will do more harm than good. The value of a free and open Internet is enormous.

Stop cyberwar fear mongering. Ratchet down cyberspace saber rattling. Start negotiations on limiting the militarization of cyberspace and increasing international police cooperation. This won’t magically make us safe, but it will make us safer.

This essay first appeared on the U.S. News and World Report website, as part of a series of essays on the question: “Should there be an international treaty on cyberwarfare?”

Posted on June 14, 2012 at 6:40 AM

Backdoor Found (Maybe) in Chinese-Made Military Silicon Chips

We all knew this was possible, but now researchers have actually found a backdoor in a deployed military-grade chip:

Claims have been made by intelligence agencies around the world, from MI5 and the NSA to IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure, uses a sophisticated encryption standard, and is manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. Using this key, you can disable the chip or reprogram it at will, even if it has been locked by the user with their own key. This particular chip is prevalent in many systems, from weapons and nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet-style weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for national security and public infrastructure.

Here’s the draft paper:

Abstract. This paper is a short summary of the first real-world detection of a backdoor in a military-grade FPGA. Using an innovative patented technique, we were able to detect and analyse, in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself; it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key used to activate the backdoor. In this way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access the unencrypted configuration bitstream, or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, and re-programming, as well as reverse engineering of the design, which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed; those using this family of chips have to accept the fact that it can be easily compromised, or it will have to be physically replaced after a redesign of the silicon itself.

The chip in question was designed in the U.S. by a U.S. company, but manufactured in China. News stories. Comment threads.

One researcher maintains that this is not malicious:

Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. These backdoors aren’t malicious; they’re a byproduct of software complexity. Systems need to be debugged before being shipped to customers, so the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).

[…]

It could just be part of the original JTAG building-block. Actel didn’t design their own, but instead purchased the JTAG design and placed it on their chips. They are not aware of precisely all the functionality in that JTAG block, or how it might interact with the rest of the system.

But I’m betting that Microsemi/Actel know about the functionality, but thought of it as a debug feature, rather than a backdoor.

It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity. On the other hand, it’s easy for a manufacturer to flip bits. Consider that the functionality is part of the design, but that Actel intended to disable it by flipping a bit to turn it off. A manufacturer could easily flip that bit and turn it back on again. In other words, it’s extraordinarily difficult to add complex new functionality, but a manufacturer might get lucky and be able to make small tweaks to accomplish its goals.
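
That bit-flipping scenario is easy to model: the test circuitry ships in every die, and a single configuration bit decides whether it answers. Here is a toy sketch; the bit position and names are invented, and real FPGA configuration fuses are far more involved:

```python
# Toy model of a fuse-gated debug feature. The bit position and names are
# invented for illustration.
TEST_MODE_ENABLE = 1 << 7  # hypothetical fuse bit in a configuration word

def test_mode_responds(config_word: int) -> bool:
    """The debug circuitry exists in every die; this one bit merely
    decides whether it answers."""
    return bool(config_word & TEST_MODE_ENABLE)

as_designed = 0x00  # designer ships with the bit cleared: feature disabled
print(test_mode_responds(as_designed))  # False

# A manufacturer needn't add any circuitry, only flip one bit in production:
as_manufactured = as_designed | TEST_MODE_ENABLE
print(test_mode_responds(as_manufactured))  # True
```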

EDITED TO ADD (5/29): Two more articles.

EDITED TO ADD (6/8): Three more articles.

EDITED TO ADD (6/10): A response from the chip manufacturer.

The researchers’ assertion is that, with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with the highest level of security settings. This setting disables the use of any type of passcode to gain access to all device configuration, including the internal test facility.

A response from the researchers.

In order to gain access to the backdoor and other features, a special key is required. This key has very robust DPA protection; in fact, it is one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day, and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique, which is public and covered in our patent, so everyone can independently verify our claims. That means that, given physical access to the device, an attacker can extract all the embedded IP within hours.

There is an option for the highest level of security settings: Permanent Lock. However, if the AES reprogramming option is left enabled, the device is still exposed to IP theft. If not, the Permanent Lock itself is vulnerable to fault attacks and can be disabled, opening up the path to backdoor access as before, but without the need for any passcode.
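
Reading the two statements side by side, the dispute reduces to what guards the test path: a key that is identical across an entire chip family (and extractable), versus a per-customer passcode that only protects that path if an extra fuse is programmed. Here is a toy model of that unlock logic; the class, key, and fuse semantics are invented to illustrate the disagreement, not taken from any Actel/Microsemi documentation:

```python
# Toy model of the disputed unlock logic. All names and values are
# invented for illustration.
FAMILY_BACKDOOR_KEY = "same-key-in-every-chip"  # per the researchers: shared, extractable

class ToyFpga:
    def __init__(self, passcode, backdoor_fuse_set=False):
        self.passcode = passcode                    # per-customer secret
        self.backdoor_fuse_set = backdoor_fuse_set  # the undocumented extra fuse

    def unlock_test_mode(self, key, passcode=None):
        if key != FAMILY_BACKDOOR_KEY:
            return False
        if self.backdoor_fuse_set:
            # Only with the fuse programmed does the passcode also
            # guard the test path.
            return passcode == self.passcode
        return True  # default: the family-wide key alone suffices

chip = ToyFpga(passcode="customer-secret")         # fuse left at its default
print(chip.unlock_test_mode(FAMILY_BACKDOOR_KEY))  # True: passcode never consulted
```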

Posted on May 29, 2012 at 2:07 PM

