Entries Tagged "military"


Switzerland National Defense

Interesting blog post about this book on Switzerland’s national defense.

To make a long story short, McPhee describes two things: how Switzerland requires military service from every able-bodied male Swiss citizen—a model later emulated and expanded by Israel—and how the Swiss military has, in effect, wired the entire country to blow in the event of foreign invasion. To keep enemy armies out, bridges will be dynamited and, whenever possible, deliberately collapsed onto other roads and bridges below; hillsides have been weaponized so they can be triggered as valley-sweeping artificial landslides; mountain tunnels will be sealed from within to act as nuclear-proof air raid shelters; and much more.

[…]

To interrupt the utility of bridges, tunnels, highways, railroads, Switzerland has established three thousand points of demolition. That is the number officially printed. It has been suggested to me that to approximate a true figure a reader ought to multiply by two. Where a highway bridge crosses a railroad, a segment of the bridge is programmed to drop on the railroad. Primacord fuses are built into the bridge. Hidden artillery is in place on either side, set to prevent the enemy from clearing or repairing the damage.

Further:

Near the German border of Switzerland, every railroad and highway tunnel has been prepared to pinch shut explosively. Nearby mountains have been made so porous that whole divisions can fit inside them. There are weapons and soldiers under barns. There are cannons inside pretty houses. Where Swiss highways happen to run on narrow ground between the edges of lakes and to the bottoms of cliffs, man-made rockslides are ready to slide.

[…]

McPhee points to small moments of “fake stonework, concealing the artillery behind it,” that dot Switzerland’s Alpine geology, little doors that will pop open to reveal internal cannons and blast the country’s roads to smithereens. Later, passing under a mountain bridge, McPhee notices “small steel doors in one pier” hinting that the bridge “was ready to blow. It had been superseded, however, by an even higher bridge, which leaped through the sky above—a part of the new road to Simplon. In an extreme emergency, the midspan of the new bridge would no doubt drop on the old one.”

The book is on my Kindle.

Posted on June 20, 2012 at 7:27 AM

The Failure of Anti-Virus Companies to Catch Military Malware

Mikko Hypponen of F-Secure attempts to explain why anti-virus companies didn’t catch Stuxnet, DuQu, and Flame:

When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.

What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general.

It wasn’t the first time this had happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems. When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time. A related malware called DuQu also went undetected by antivirus firms for over a year.

Stuxnet, Duqu and Flame are not normal, everyday malware, of course. All three of them were most likely developed by a Western intelligence agency as part of covert operations that weren’t meant to be discovered.

His conclusion is that the attackers—in this case, military intelligence agencies—are simply better than commercial-grade anti-virus programs.

The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms. But targeted attacks like these go to great lengths to avoid antivirus products on purpose. And the zero-day exploits used in these attacks are unknown to antivirus companies by definition. As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn’t be detected. They have unlimited time to perfect their attacks. It’s not a fair war between the attackers and the defenders when the attackers have access to our weapons.

We really should have been able to do better. But we didn’t. We were out of our league, in our own game.

I don’t buy this. It isn’t just the military that tests its malware against commercial defense products; criminals do it, too. Virus and worm writers do it. Spam writers do it. This is the never-ending arms race between attacker and defender, and it’s been going on for decades. Probably the people who wrote Flame had a larger budget than a large-scale criminal organization, but their evasive techniques weren’t magically better. Note that F-Secure and others had samples of Flame; they just didn’t do anything about them.

I think the difference has more to do with the ways in which these military malware programs spread. That is, slowly and stealthily. It was never a priority to understand—and then write signatures to detect—the Flame samples because they were never considered a problem. Maybe they were classified as a one-off. Or as an anomaly. I don’t know, but it seems clear that conventional non-military malware writers who want to evade detection should adopt the propagation techniques of Flame, Stuxnet, and DuQu.
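To make the triage point concrete, here is a toy sketch of a prevalence-driven sample pipeline. It is not how F-Secure or any other vendor actually works; every name, weight, and threshold below is invented. But it shows how a slow-spreading sample can sit below the human-review threshold indefinitely:

```python
# Toy model of prevalence-driven triage in an automated sample pipeline.
# Every name, weight, and threshold here is invented for illustration;
# this is not F-Secure's (or anyone's) actual system.

from dataclasses import dataclass

@dataclass
class Sample:
    sha256: str
    reports: int                  # how many machines/feeds reported it
    matches_known_family: bool    # signature hit against known malware

ANALYST_THRESHOLD = 10.0          # arbitrary cutoff for human review

def triage_score(s: Sample) -> float:
    score = s.reports * 1.0                   # prevalence dominates
    if s.matches_known_family:
        score += 20.0                         # known-bad lookalikes jump the queue
    return score

def needs_human_review(s: Sample) -> bool:
    return triage_score(s) >= ANALYST_THRESHOLD

# A noisy banking trojan: thousands of reports, flagged immediately.
trojan = Sample("aaa...", reports=5000, matches_known_family=True)

# A Flame-style sample: spreads slowly, matches nothing known. It sits
# in the archive for years, which is exactly what happened.
stealthy = Sample("bbb...", reports=3, matches_known_family=False)

assert needs_human_review(trojan)
assert not needs_human_review(stealthy)
```

A sample that never accumulates reports never gets looked at, no matter how long it sits in the archive.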

EDITED TO ADD (6/23): F-Secure responded. Unfortunately, it’s not a very substantive response. It’s a pity; I think there’s an interesting discussion to be had about why the anti-virus companies all missed Flame for so long.

Posted on June 19, 2012 at 7:11 AM

Backdoor Found (Maybe) in Chinese-Made Military Silicon Chips

We all knew this was possible, but researchers claim to have found an actual backdoor in the wild:

Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.

Here’s the draft paper:

Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse, in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself; it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key to activate the backdoor. This way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, re-programming as well as reverse engineering of the design, which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact that it can be easily compromised, or that it will have to be physically replaced after a redesign of the silicon itself.

The chip in question was designed in the U.S. by a U.S. company, but manufactured in China. News stories. Comment threads.

One researcher maintains that this is not malicious:

Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. The cause of these backdoors isn’t malicious, but a byproduct of software complexity. Systems need to be debugged before being shipped to customers. Therefore, the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).

[…]

It could just be part of the original JTAG building-block. Actel didn’t design their own, but instead purchased the JTAG design and placed it on their chips. They are not aware of precisely all the functionality in that JTAG block, or how it might interact with the rest of the system.

But I’m betting that Microsemi/Actel knew about the functionality and thought of it as a debug feature, rather than a backdoor.

It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity. On the other hand, it’s easy for a manufacturer to flip bits. Consider that the functionality is part of the design, but that Actel intended to disable it by flipping a bit that turns it off. A manufacturer could easily flip that bit and turn it back on again. In other words, it’s extraordinarily difficult to add complex new functionality, but a manufacturer might get lucky and be able to make small tweaks to accomplish its goals.
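That bit-flipping scenario is easy to model in the abstract. The sketch below is a toy simulation with an invented register layout; it describes nothing about the actual ProASIC3 silicon. It just shows why re-enabling a designed-in test feature is a one-bit change, while adding new functionality would mean redesigning the chip:

```python
# Toy simulation of a test feature gated by a single disable bit.
# The register layout is invented; it does not describe the actual
# Actel/Microsemi ProASIC3 silicon.

TEST_INTERFACE_DISABLE = 1 << 7          # hypothetical config/fuse bit

def test_interface_enabled(config_word: int) -> bool:
    # The debug/test logic is always present in the design;
    # this one bit only decides whether it responds.
    return (config_word & TEST_INTERFACE_DISABLE) == 0

shipped = 0b1000_0000                          # vendor sets the disable bit
tampered = shipped & ~TEST_INTERFACE_DISABLE   # one bit flipped at the fab

assert not test_interface_enabled(shipped)
assert test_interface_enabled(tampered)
```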

EDITED TO ADD (5/29): Two more articles.

EDITED TO ADD (6/8): Three more articles.

EDITED TO ADD (6/10): A response from the chip manufacturer.

The researchers’ assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with the highest level of security settings. This security setting will disable the use of any type of passcode to gain access to all device configuration, including the internal test facility.

A response from the researchers.

In order to gain access to the backdoor and other features a special key is required. This key has very robust DPA protection; in fact, it is one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day, and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique, which is public and covered in our patent, so everyone can independently verify our claims. That means that, given physical access to the device, an attacker can extract all the embedded IP within hours.

There is an option for the highest level of security settings – Permanent Lock. However, if the AES reprogramming option is left enabled, the device is still exposed to IP theft. If it is not, the Permanent Lock itself is vulnerable to fault attacks and can be disabled, opening up the path to backdoor access as before, but without the need for any passcode.
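The detail that matters most in that response is the shared key. If one key really does unlock every ProASIC3, Igloo, Fusion, and SmartFusion device, then the expensive side-channel extraction only has to succeed once. A toy illustration, with every name and value invented:

```python
# Toy illustration of why one family-wide key is so damaging: a single
# physical extraction (here, pea_extract on one device) unlocks every
# other device in the family. All names and values are invented.

FAMILY_KEY = 0xDEADBEEF          # same in every hypothetical device

class Device:
    def __init__(self, serial: int):
        self.serial = serial
        self._key = FAMILY_KEY   # burned in at fabrication

    def unlock_test_mode(self, key: int) -> bool:
        return key == self._key

def pea_extract(device: Device) -> int:
    # Stand-in for the side-channel extraction step; in reality this
    # is the hard part, but it only has to succeed once.
    return device._key

fleet = [Device(serial=n) for n in range(1000)]
stolen = pea_extract(fleet[0])                         # attack one in the lab...
assert all(d.unlock_test_mode(stolen) for d in fleet)  # ...own them all
```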

Posted on May 29, 2012 at 2:07 PM

Naval Drones

With all the talk about airborne drones like the Predator, it’s easy to forget that drones can be in the water as well. Meet the Common Unmanned Surface Vessel (CUSV):

The boat—painted in Navy gray and with a striking resemblance to a PT boat—is 39 feet long and can reach a top speed of 28 knots. Using a modified version of the unmanned Shadow surveillance aircraft technology that logged 700,000 hours of duty in the Middle East, the boat can be controlled remotely from 10 to 12 miles away from a command station on land, at sea or in the air, Haslett said.

Farther out, it can be switched to a satellite control system, which Textron said could expand its range to 1,200 miles. The boat could be launched from virtually any large Navy vessel.

[…]

Using diesel fuel, the boat could operate for up to 72 hours without refueling, depending upon its traveling speed and the weight of equipment being carried, said Stanley DeGeus, senior business development director for AAI’s advanced systems. The fuel supply could be extended for up to a week on slow-moving reconnaissance missions, he said.
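Some rough arithmetic on the quoted figures: assuming a constant speed and linear fuel burn (which overstates real-world range), range is just speed times endurance. The 5-knot loiter speed below is my assumption; the article gives no figure:

```python
# Rough range arithmetic from the quoted figures. Assumes constant
# speed and linear fuel burn, which overstates real-world range;
# purely illustrative.

def range_nm(speed_knots: float, endurance_hours: float) -> float:
    return speed_knots * endurance_hours

print(range_nm(28, 72))       # flat out for 72 hours: ~2,000 nm
print(range_nm(5, 7 * 24))    # a week at an assumed 5-knot loiter: 840 nm
```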

Posted on May 7, 2012 at 6:52 AM

U.S. Drones Have a Computer Virus

You’d think we would be more careful than this:

A computer virus has infected the cockpits of America’s Predator and Reaper drones, logging pilots’ every keystroke as they remotely fly missions over Afghanistan and other warzones.

[…]

“We keep wiping it off, and it keeps coming back,” says a source familiar with the network infection, one of three that told Danger Room about the virus. “We think it’s benign. But we just don’t know.”

EDITED TO ADD (10/13): No one told the IT department for two weeks.

Posted on October 10, 2011 at 6:38 AM

Insurgent Groups Exhibit Learning Curve

Interesting research:

After analyzing reams of publicly available data on casualties from Iraq, Afghanistan, Pakistan and decades of terrorist attacks, the scientists conclude that “insurgents pretty much seemed to be following a progress curve—or a learning curve—that’s very common in the manufacturing literature,” says physicist Neil Johnson of the University of Miami in Florida and lead author of the study.

Paper here.
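The “progress curve” borrowed from manufacturing is the classic power law: the time to produce the n-th unit falls as T_n = T_1 * n^(-b), and the paper fits the same form to the intervals between successive fatal attacks. Here is a minimal sketch of estimating the exponent b from interval data; the numbers are made up for illustration and are not the study’s data:

```python
# Estimate the progress-curve exponent b in T_n = T_1 * n**(-b) by
# least-squares on log-log data. The intervals below are made up for
# illustration; they are not the study's casualty data.

import math

intervals = [100, 62, 48, 40, 35, 31]   # days between successive attacks (fake)

xs = [math.log(n) for n in range(1, len(intervals) + 1)]
ys = [math.log(t) for t in intervals]

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)

b = -num / den   # b > 0 means attacks come faster as the group "learns"
print(f"estimated b = {b:.2f}")          # roughly 0.65 for this fake series
```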

Posted on July 12, 2011 at 7:13 AM
