Entries Tagged "military"

Page 9 of 16

Yet Another New York Times Cyberwar Article

It’s the season, I guess:

The United States has no clear military policy about how the nation might respond to a cyberattack on its communications, financial or power networks, a panel of scientists and policy advisers warned Wednesday, and the country needs to clarify both its offensive capabilities and how it would respond to such attacks.

The report, based on a three-year study by a panel assembled by the National Academy of Sciences, is the first major effort to look at the military use of computer technologies as weapons. The potential use of such technologies offensively has been widely discussed in recent years, and disruptions of communications systems and Web sites have become a standard occurrence in both political and military conflicts since 2000.

Here’s the report summary, which I have not read yet.

I was particularly disturbed by the last paragraph of the newspaper article:

Introducing the possibility of a nuclear response to a catastrophic cyberattack would be expected to serve the same purpose.

Nuclear war is not a suitable response to a cyberattack.

Posted on May 1, 2009 at 10:46 AM

Preparing for Cyberwar

Interesting article from The New York Times.

Because so many aspects of the American effort to develop cyberweapons and define their proper use remain classified, many of those officials declined to speak on the record. The White House declined several requests for interviews or to say whether Mr. Obama as a matter of policy supports or opposes the use of American cyberweapons.

The most exotic innovations under consideration would enable a Pentagon programmer to surreptitiously enter a computer server in Russia or China, for example, and destroy a “botnet”—a potentially destructive program that commandeers infected machines into a vast network that can be clandestinely controlled—before it could be unleashed in the United States.

Or American intelligence agencies could activate malicious code that is secretly embedded on computer chips when they are manufactured, enabling the United States to take command of an enemy’s computers by remote control over the Internet. That, of course, is exactly the kind of attack officials fear could be launched on American targets, often through Chinese-made chips or computer servers.

So far, however, there are no broad authorizations for American forces to engage in cyberwar. The invasion of the Qaeda computer in Iraq several years ago and the covert activity in Iran were each individually authorized by Mr. Bush. When he issued a set of classified presidential orders in January 2008 to organize and improve America’s online defenses, the administration could not agree on how to write the authorization.

I’ve written about cyberwar here.

Posted on April 30, 2009 at 2:18 PM

Hacking U.S. Military Satellites

The problem is more widespread than you might think:

First lofted into orbit in the 1970s, the FLTSATCOM birds were at the time a major advance in military communications. Their 23 channels were used by every branch of the U.S. armed forces and the White House for encrypted data and voice, typically from portable ground units that could be quickly unpacked and put to use on the battlefield.

As the original FLTSAT constellation of four satellites fell out of service, the Navy launched a more advanced UFO satellite (for Ultra High Frequency Follow-On) to replace them. Today, there are two FLTSAT and eight UFO birds in geosynchronous orbit. Navy contractors are working on a next-generation replacement, the Mobile User Objective System, scheduled to begin deployment in September 2009.

Until then, the military is still using aging FLTSAT and UFO satellites—and so are a lot of Brazilians. While the technology on the transponders still dates from the 1970s, radio sets back on Earth have only improved and plummeted in cost—opening a cheap, efficient and illegal backdoor.

To use the satellite, pirates typically take an ordinary ham radio transmitter, which operates in the 144- to 148-MHz range, and add a frequency doubler cobbled from coils and a varactor diode. That lets the radio stretch into the lower end of FLTSATCOM’s 292- to 317-MHz uplink range. All the gear can be bought near any truck stop for less than $500. Ads on specialized websites offer to perform the conversion for less than $100. Taught the ropes, even rough electricians can make Bolinha-ware.
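The arithmetic behind the hack is easy to verify: doubling the 2-meter ham band (144 to 148 MHz) yields 288 to 296 MHz, which overlaps only the lower end of the 292- to 317-MHz uplink range, exactly as the article says. A quick sanity check (illustrative only; the band edges come from the article, everything else is a hypothetical sketch):

```python
# Illustrative check of the frequency-doubler arithmetic described above.
# Band edges are taken from the article; the code itself is a toy sketch.

HAM_2M_BAND = (144.0, 148.0)       # MHz, ordinary ham transmitter range
FLTSATCOM_UPLINK = (292.0, 317.0)  # MHz, satellite uplink range

def doubled_band(band):
    """A frequency doubler multiplies the carrier frequency by two."""
    lo, hi = band
    return (2 * lo, 2 * hi)

def overlap(a, b):
    """Return the overlapping frequency range of two bands, or None."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

usable = overlap(doubled_band(HAM_2M_BAND), FLTSATCOM_UPLINK)
print(usable)  # (292.0, 296.0) -- only the lower end of the uplink range
```

The overlap is just 4 MHz wide, which matches the article's observation that the converted radios reach only the lower end of the uplink band.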

Posted on April 23, 2009 at 12:30 PM

History and Ethics of Military Robots

This article gives an overview of U.S. military robots, and discusses a bit around the issues regarding their use in war:

As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”

Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It essentially is a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.

The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 saw flashing lights underneath him while flying over Afghanistan at twenty-three thousand feet and thought he was under fire from insurgents. Without getting required permission from his commanders, he dropped a 500-pound bomb on the lights. They instead turned out to be troops from Canada on a night training mission. Four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter and he was found guilty of dereliction of duty.

Change this scenario to an unmanned system and military lawyers aren’t sure what to do. Asks a Navy officer, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”
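Taken literally, the targeting checklist quoted earlier is just a chain of yes/no gates. A minimal sketch (purely hypothetical; none of these names, types, or thresholds come from any real system) makes the article's point concrete: the hard judgment call, deciding whether something is an enemy at all, happens before any such function is ever called.

```python
# Hypothetical sketch of the rules-of-engagement checklist quoted above.
# Every identifier and criterion here is illustrative, not a real system.

from dataclasses import dataclass

@dataclass
class Contact:
    target_type: str
    in_free_fire_zone: bool
    friendlies_within_200m: int
    civilians_within_200m: int

AUTHORIZED_TARGETS = {"T-80"}  # assumed list of pre-approved target types

def weapons_release_authorized(c: Contact) -> bool:
    """Walk the quoted checklist in order; any failed gate blocks release."""
    if c.target_type not in AUTHORIZED_TARGETS:
        return False                  # identification not confirmed
    if not c.in_free_fire_zone:
        return False                  # location not authorized
    if c.friendlies_within_200m > 0:
        return False                  # friendlies detected
    if c.civilians_within_200m > 0:
        return False                  # would require human input instead
    return True                       # no human command authority required

tank = Contact("T-80", True, 0, 0)
print(weapons_release_authorized(tank))  # True
```

The function is trivially simple, which is the point: encoding the rules is easy, while classifying the contact correctly in the fog of war is the unsolved part.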

The article was adapted from the author’s book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.

Related is this paper on the ethics of autonomous military robots.

Posted on March 9, 2009 at 6:59 AM

Electromagnetic Pulse Grenades

There are rumors of a prototype:

Even the highly advanced US forces hadn’t been generally thought to have developed a successful pulse-bomb yet, with most reports indicating that such a capability remains a few years off (as has been the case for decades). Furthermore, the pulse ordnance has usually been seen as large and heavy, in the same league as an aircraft bomb or cruise missile warhead—or in the case of an HPM raygun, of a weapons-pod or aircraft payload size.

Now, however, it appears that in fact the US military has already managed to get the coveted pulse-bomb tech down to grenade size. Colonel Buckhout apparently envisages the Army electronic warfare troopers of tomorrow lobbing a pulse grenade through the window of an enemy command post or similar, so knocking out all their comms.

Posted on February 26, 2009 at 6:48 AM

Snipers

Really interesting article on snipers:

It might be because there’s another side to snipers and sniping after all. In particular, even though a sniper will often be personally responsible for huge numbers of deaths—body counts in the hundreds for an individual shooter are far from unheard of—as a class snipers kill relatively few people compared to the effects they achieve. Furthermore, when a sniper kills someone, it is almost always a person they meant to kill, not just someone standing around in the wrong place and time. These are not things that most branches of the military can say.

But, for a well-trained military sniper at least, “collateral damage”—the accidental killing and injuring of bystanders and unintended targets—is almost nonexistent. Mistakes do occur, but compared to a platoon of regular soldiers armed with automatic weapons, rockets, grenades etc a sniper is delicacy itself. Compared to crew-served and vehicle weapons, artillery, tanks, air support or missile strikes, a sniper is not just surgically precise but almost magically so. Yet he (or sometimes she) is reviled as the next thing to a murderer, while the mainstream mass slaughter people are seen as relatively normal.

Consider the team who put a strike jet into the air: a couple of aircrew, technicians, armourers, planners, their supporting cooks and medics and security and supply people. Perhaps fifty or sixty people, then, who together send up a plane which can deliver a huge load of bombs at least twice a day. Almost every week in Afghanistan and Iraq right now, such bombs are dropped. The nature of heavy ordnance being what it is, these bombs kill and maim not just their targets (assuming there is a correctly-located target) but everyone else around. Civilian deaths in air strikes are becoming a massive issue for NATO and coalition troops in Afghanistan.

Those sixty people, in a busy week, could easily put hundreds of tons of munitions into a battlefield—an amount of destructive power approaching that of a small nuclear weapon. This kind of firepower can and will kill many times more people than sixty snipers could in the same time span—and many of the dead will typically be innocent bystanders, often including children and the elderly. Such things are happening, on longer timescales, as this article is written. Furthermore, all these bomber people—even the aircrew—run significantly less personal risk than snipers do.

But nobody thinks of a bomb armourer, or a “fighter” pilot, or a base cook as a cowardly assassin. Their efforts are at least as deadly per capita, they run less personal risk, but they’re just doing their jobs. And let’s not forget everyone else: artillerymen, tank crews, machine gunners. Nobody particularly loathes them, or considers them cowardly assassins.

Posted on December 16, 2008 at 6:25 AM

Killing Robot Being Tested by Lockheed Martin

Wow:

The frightening but fascinatingly cool hovering robot—the MKV (Multiple Kill Vehicle)—is designed to shoot down enemy ballistic missiles.

A video released by the Missile Defense Agency (MDA) shows the MKV being tested at the National Hover Test Facility at Edwards Air Force Base, in California.

Inside a large steel cage, Lockheed’s MKV lifts off the ground, moves left and right, rapidly firing as flames shoot out of its bottom and sides. This description doesn’t do it justice, really; you have to see the video yourself.

During the test, the MKV is shown lifting off under its own propulsion and holding position using its onboard retro-rockets. The potential of this drone is nothing short of science fiction.

When watching the video, you can’t help but be reminded of post-apocalyptic killing machines, seen in such films as The Terminator and The Matrix.

Okay, people. Now is the time to start discussing the rules of war for autonomous robots. Now, when it’s still theoretical.

Posted on December 15, 2008 at 6:07 AM

Movie-Plot Threat: Terrorists Using Twitter

No, really. (Commentary here.)

This is just ridiculous. Of course the bad guys will use all the communications tools available to the rest of us. They have to communicate, after all. They’ll also use cars, water faucets, and all-you-can-eat buffet lunches. So what?

This commentary is dead on:

Steven Aftergood, a veteran intelligence analyst at the Federation of American Scientists, doesn’t dismiss the Army presentation out of hand. But nor does he think it’s tackling a terribly serious threat. “Red-teaming exercises to anticipate adversary operations are fundamental. But they need to be informed by a sense of what’s realistic and important and what’s not,” he tells Danger Room. “If we have time to worry about ‘Twitter threats’ then we’re in good shape. I mean, it’s important to keep some sense of proportion.”

Posted on October 30, 2008 at 7:51 AM

Barack Obama Discusses Security Trade-Offs

I generally avoid commenting on election politics—that’s not what this blog is about—but this comment by Barack Obama is worth discussing:

[Q] I have been collecting accounts of your meeting with David Petraeus in Baghdad. And you had [inaudible] after he had made a really strong pitch [inaudible] for maximum flexibility. A lot of politicians at that moment would have said [inaudible] but from what I hear, you pushed back.

[BO] I did. I remember the conversation, pretty precisely. He made the case for maximum flexibility and I said you know what if I were in your shoes I would be making the exact same argument because your job right now is to succeed in Iraq on as favorable terms as we can get. My job as a potential commander in chief is to view your counsel and your interests through the prism of our overall national security which includes what is happening in Afghanistan, which includes the costs to our image in the Middle East, to the continued occupation, which includes the financial costs of our occupation, which includes what it is doing to our military. So I said look, I described in my mind at least an analogous situation where I am sure he has to deal with situations where the commanding officer in [inaudible] says I need more troops here now because I really think I can make progress doing X, Y, and Z. That commanding officer is doing his job in Ramadi, but Petraeus’s job is to step back and see how does it impact Iraq as a whole. My argument was I have got to do the same thing here. And based on my strong assessment particularly having just come from Afghanistan we’re going to have to make a different decision. But the point is that hopefully I communicated to the press my complete respect and gratitude to him and Proder who was in the meeting for their outstanding work. Our differences don’t necessarily derive from differences in sort of, or my differences with him don’t derive from tactical objections to his approach. But rather from a strategic framework that is trying to take into account the challenges to our national security and the fact that we’ve got finite resources.

I have made this general point again and again—about airline security, about terrorism, about a lot of things—that the person in charge of the security system can’t be the person who decides what resources to devote to that security system. The analogy I like to use is a company: the VP of marketing wants all the money for marketing, the VP of engineering wants all the money for engineering, and so on; and the CEO has to balance all of those needs and do what’s right for the company. So of course the TSA wants to spend all this money on new airplane security systems; that’s their job. Someone above the TSA has to balance the risks to airlines with the other risks our country faces and allocate budget accordingly. Security is a trade-off, and that trade-off has to be made by someone with responsibility over all aspects of that trade-off.

I don’t think I’ve ever heard a politician make this point so explicitly.

EDITED TO ADD (10/27): This is a security blog, not a political blog. As such, I have deleted all political comments below—on both sides. You are welcome to discuss this notion of security trade-offs and the appropriate level to make them, but not the election or the candidates.

Posted on October 27, 2008 at 6:31 AM

The Pentagon's World of Warcraft Movie-Plot Threat

In a presentation that rivals any of my movie-plot threat contest entries, a Pentagon researcher is worried that terrorists might plot using World of Warcraft:

In a presentation late last week at the Director of National Intelligence Open Source Conference in Washington, Dr. Dwight Toavs, a professor at the Pentagon-funded National Defense University, gave a bit of a primer on virtual worlds to an audience largely ignorant about what happens in these online spaces. Then he launched into a scenario, to demonstrate how a meatspace plot might be hidden by in-game chatter.

In it, two World of Warcraft players discuss a raid on the “White Keep” inside the “Stonetalon Mountains.” The major objective is to set off a “Dragon Fire spell” inside, and make off with “110 Gold and 234 Silver” in treasure. “No one will dance there for a hundred years after this spell is cast,” one player, “war_monger,” crows.

Except, in this case, the White Keep is at 1600 Pennsylvania Avenue. “Dragon Fire” is an unconventional weapon. And “110 Gold and 234 Silver” tells the plotters how to align the game’s map with one of Washington, D.C.

I don’t know why he thinks that the terrorists will use World of Warcraft and not some other online world. Or Facebook. Or Usenet. Or a chat room. Or e-mail. Or the telephone. I don’t even know why the particular form of communication is in any way important.

The article ends with this nice paragraph:

Steven Aftergood, the Federation of American Scientists analyst who’s been following the intelligence community for years, wonders how realistic these sorts of scenarios are, really. “This concern is out there. But it has to be viewed in context. It’s the job of intelligence agencies to anticipate threats and counter them. With that orientation, they’re always going to give more weight to a particular scenario than an objective analysis would allow,” he tells Danger Room. “Could terrorists use Second Life? Sure, they can use anything. But is it a significant augmentation? That’s not obvious. It’s a scenario that an intelligence officer is duty-bound to consider. That’s all.”

My guess is still that some clever Pentagon researchers have figured out how to play World of Warcraft on the job, and they’re not giving that perk up anytime soon.

Posted on September 18, 2008 at 1:29 PM

