Entries Tagged "war"

Terrorist Havens

Good essay on “terrorist havens”—like Afghanistan—and why they’re not as big a worry as some maintain:

Rationales for maintaining the counterinsurgency in Afghanistan are varied and complex, but they all center on one key tenet: that Afghanistan must not be allowed to again become a haven for terrorist groups, especially al-Qaeda.

[…]

The debate has largely overlooked a more basic question: How important to terrorist groups is any physical haven? More to the point: How much does a haven affect the danger of terrorist attacks against U.S. interests, especially the U.S. homeland? The answer to the second question is: not nearly as much as unstated assumptions underlying the current debate seem to suppose. When a group has a haven, it will use it for such purposes as basic training of recruits. But the operations most important to future terrorist attacks do not need such a home, and few recruits are required for even very deadly terrorism. Consider: The preparations most important to the Sept. 11, 2001, attacks took place not in training camps in Afghanistan but, rather, in apartments in Germany, hotel rooms in Spain and flight schools in the United States.

In the past couple of decades, international terrorist groups have thrived by exploiting globalization and information technology, which have lessened their dependence on physical havens.

By utilizing networks such as the Internet, terrorist organizations have become more network-like, not beholden to any one headquarters. A significant jihadist terrorist threat to the United States persists, but that does not mean it will consist of attacks instigated and commanded from a South Asian haven, or that it will require a haven at all. Al-Qaeda’s role in that threat is now less one of commander than of ideological lodestar, and for that role a haven is almost meaningless.

Posted on September 21, 2009 at 6:46 AM

David Kilcullen on Security and Insurgency

Very interesting hour-long interview.

Australian-born David Kilcullen was the senior counterinsurgency advisor to US General David Petraeus during his time in Iraq. The implementation of his strategies is now regarded as a major turning point in the war.

Here, in a fascinating discussion with human rights lawyer Julian Burnside at the Melbourne Writers’ Festival, he talks about the ethics and tactics of contemporary warfare.

Posted on September 7, 2009 at 7:33 AM

Yet Another New York Times Cyberwar Article

It’s the season, I guess:

The United States has no clear military policy about how the nation might respond to a cyberattack on its communications, financial or power networks, a panel of scientists and policy advisers warned Wednesday, and the country needs to clarify both its offensive capabilities and how it would respond to such attacks.

The report, based on a three-year study by a panel assembled by the National Academy of Sciences, is the first major effort to look at the military use of computer technologies as weapons. The potential use of such technologies offensively has been widely discussed in recent years, and disruptions of communications systems and Web sites have become a standard occurrence in both political and military conflicts since 2000.

Here’s the report summary, which I have not read yet.

I was particularly disturbed by the last paragraph of the newspaper article:

Introducing the possibility of a nuclear response to a catastrophic cyberattack would be expected to serve the same purpose.

Nuclear war is not a suitable response to a cyberattack.

Posted on May 1, 2009 at 10:46 AM

History and Ethics of Military Robots

This article gives an overview of U.S. military robots and discusses some of the issues surrounding their use in war:

As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”

Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It essentially is a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.

The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 flying over Afghanistan at twenty-three thousand feet saw flashing lights underneath him and thought he was under fire from insurgents. Without getting the required permission from his commanders, he dropped a 500-pound bomb on the lights. They turned out to be Canadian troops on a night training mission. Four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter; he was found guilty of dereliction of duty.

Change this scenario to an unmanned system and military lawyers aren’t sure what to do. Asks a Navy officer, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”
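
The screening list quoted above is, mechanically, just a short chain of boolean gates, which is part of why it sounds deceptively tractable. Here is a minimal sketch in Python; every field and predicate is an invented placeholder for illustration, not anything from the article or a real targeting system:

from dataclasses import dataclass

@dataclass
class Contact:
    target_type: str              # e.g. "T-80 tank", as reported by sensors
    in_free_fire_zone: bool       # location checked against authorized zones
    friendlies_within_200m: bool  # friendly units within a 200-meter radius
    civilians_within_200m: bool   # civilians within a 200-meter radius

def autonomous_release_authorized(c: Contact) -> bool:
    # Walk the quoted screening questions in order; any failure aborts.
    if c.target_type != "T-80 tank":   # Is the target a Soviet-made T-80?
        return False
    if not c.in_free_fire_zone:        # Is it in an authorized free-fire zone?
        return False
    if c.friendlies_within_200m:       # Any friendlies detected?
        return False
    if c.civilians_within_200m:        # Civilians detected: withhold autonomous
        return False                   # release and defer to a human
    return True                        # "No human command authority required"

# e.g. autonomous_release_authorized(Contact("T-80 tank", True, False, False))
# returns True under these hypothetical inputs.

The sketch also makes the article’s point concrete: the checklist itself is trivial, and every hard problem (whether that blob really is a T-80, whether the person nearby is a civilian) is hidden inside the input fields, which is exactly where the judgment calls live.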

The article was adapted from P. W. Singer’s book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.

Related is this paper on the ethics of autonomous military robots.

Posted on March 9, 2009 at 6:59 AM

Electromagnetic Pulse Grenades

There are rumors of a prototype:

Even the highly advanced US forces haven’t generally been thought to have developed a successful pulse-bomb yet, with most reports indicating that such a capability remains a few years off (as has been the case for decades). Furthermore, pulse ordnance has usually been seen as large and heavy, in the same league as an aircraft bomb or cruise missile warhead—or, in the case of an HPM raygun, of weapons-pod or aircraft payload size.

Now, however, it appears that in fact the US military has already managed to get the coveted pulse-bomb tech down to grenade size. Colonel Buckhout apparently envisages the Army electronic warfare troopers of tomorrow lobbing a pulse grenade through the window of an enemy command post or similar, so knocking out all their comms.

Posted on February 26, 2009 at 6:48 AM

Snipers

Really interesting article on snipers:

It might be because there’s another side to snipers and sniping after all. In particular, even though a sniper will often be personally responsible for huge numbers of deaths—body counts in the hundreds for an individual shooter are far from unheard of—as a class snipers kill relatively few people compared to the effects they achieve. Furthermore, when a sniper kills someone, it is almost always a person they meant to kill, not just someone standing around in the wrong place at the wrong time. These are not things that most branches of the military can say.

But, for a well-trained military sniper at least, “collateral damage”—the accidental killing and injuring of bystanders and unintended targets—is almost nonexistent. Mistakes do occur, but compared to a platoon of regular soldiers armed with automatic weapons, rockets, grenades, etc., a sniper is delicacy itself. Compared to crew-served and vehicle weapons, artillery, tanks, air support or missile strikes, a sniper is not just surgically precise but almost magically so. Yet he (or sometimes she) is reviled as the next thing to a murderer, while the mainstream mass-slaughter people are seen as relatively normal.

Consider the team who put a strike jet into the air: a couple of aircrew, technicians, armourers, planners, their supporting cooks and medics and security and supply people. Perhaps fifty or sixty people, then, who together send up a plane which can deliver a huge load of bombs at least twice a day. Almost every week in Afghanistan and Iraq right now, such bombs are dropped. The nature of heavy ordnance being what it is, these bombs kill and maim not just their targets (assuming there is a correctly-located target) but everyone else around. Civilian deaths in air strikes are becoming a massive issue for NATO and coalition troops in Afghanistan.

Those sixty people, in a busy week, could easily put hundreds of tons of munitions into a battlefield—an amount of destructive power approaching that of a small nuclear weapon. This kind of firepower can and will kill many times more people than sixty snipers could in the same time span, and many of the dead will typically be innocent bystanders, often including children and the elderly. Such things are happening, on longer timescales, as this article is written. Furthermore, all these bomber people—even the aircrew—run significantly less personal risk than snipers do.

But nobody thinks of a bomb armourer, a “fighter” pilot, or a base cook as a cowardly assassin. Their efforts are at least as deadly per capita and they run less personal risk, but they’re just doing their jobs. And let’s not forget everyone else: artillerymen, tank crews, machine gunners. Nobody particularly loathes them or considers them cowardly assassins.

Posted on December 16, 2008 at 6:25 AM

Killing Robot Being Tested by Lockheed Martin

Wow:

The frightening but fascinatingly cool hovering robot, the MKV (Multiple Kill Vehicle), is designed to shoot down enemy ballistic missiles.

A video released by the Missile Defense Agency (MDA) shows the MKV being tested at the National Hover Test Facility at Edwards Air Force Base, in California.

Inside a large steel cage, Lockheed’s MKV lifts off the ground, moves left and right, rapidly firing as flames shoot out of its bottom and sides. This description doesn’t really do it justice; you have to see the video yourself.

During the test, the MKV is shown lifting off under its own propulsion and holding position using its onboard retro-rockets. The potential of this drone is nothing short of science fiction.

When watching the video, you can’t help but be reminded of the post-apocalyptic killing machines seen in such films as The Terminator and The Matrix.

Okay, people. Now is the time to start discussing the rules of war for autonomous robots. Now, when it’s still theoretical.

Posted on December 15, 2008 at 6:07 AM

Barack Obama Discusses Security Trade-Offs

I generally avoid commenting on election politics—that’s not what this blog is about—but this comment by Barack Obama is worth discussing:

[Q] I have been collecting accounts of your meeting with David Petraeus in Baghdad. And you had [inaudible] after he had made a really strong pitch [inaudible] for maximum flexibility. A lot of politicians at that moment would have said [inaudible] but from what I hear, you pushed back.

[BO] I did. I remember the conversation pretty precisely. He made the case for maximum flexibility and I said, you know what, if I were in your shoes I would be making the exact same argument, because your job right now is to succeed in Iraq on as favorable terms as we can get. My job as a potential commander in chief is to view your counsel and your interests through the prism of our overall national security, which includes what is happening in Afghanistan, which includes the costs to our image in the Middle East of the continued occupation, which includes the financial costs of our occupation, which includes what it is doing to our military. So I said, look, and I described what was in my mind at least an analogous situation: I am sure he has to deal with situations where the commanding officer in [inaudible] says, I need more troops here now because I really think I can make progress doing X, Y, and Z. That commanding officer is doing his job in Ramadi, but Petraeus’s job is to step back and see how it impacts Iraq as a whole. My argument was, I have got to do the same thing here. And based on my strong assessment, particularly having just come from Afghanistan, we’re going to have to make a different decision. But the point is that hopefully I communicated to the press my complete respect and gratitude to him and Proder, who was in the meeting, for their outstanding work. Our differences don’t necessarily derive from differences in, sort of… or my differences with him don’t derive from tactical objections to his approach, but rather from a strategic framework that is trying to take into account the challenges to our national security and the fact that we’ve got finite resources.

I have made this general point again and again—about airline security, about terrorism, about a lot of things—that the person in charge of the security system can’t be the person who decides what resources to devote to that security system. The analogy I like to use is a company: the VP of marketing wants all the money for marketing, the VP of engineering wants all the money for engineering, and so on; and the CEO has to balance all of those needs and do what’s right for the company. So of course the TSA wants to spend all this money on new airplane security systems; that’s their job. Someone above the TSA has to balance the risks to airlines with the other risks our country faces and allocate budget accordingly. Security is a trade-off, and that trade-off has to be made by someone with responsibility over all aspects of that trade-off.

I don’t think I’ve ever heard a politician make this point so explicitly.

EDITED TO ADD (10/27): This is a security blog, not a political blog. As such, I have deleted all political comments below—on both sides. You are welcome to discuss this notion of security trade-offs and the appropriate level at which to make them, but not the election or the candidates.

Posted on October 27, 2008 at 6:31 AM

Taleb on the Limitations of Risk Management

Nice paragraph on the limitations of risk management in this occasionally interesting interview with Nassim Nicholas Taleb:

Because then you get a Maginot Line problem. [After World War I, the French erected concrete fortifications to prevent Germany from invading again—a response to the previous war, which proved ineffective for the next one.] You know, they make sure they solve that particular problem: the Germans will not invade from here. The thing you have to be aware of most, obviously, is scenario planning, because typically if you talk about scenarios, you’ll overestimate the probability of those scenarios. Examining them at the expense of the ones you don’t examine has sometimes left a lot of people worse off, so scenario planning can be bad. I’ll just take my track record: those who did scenario planning have not fared better than those who did not do scenario planning. A lot of people have done some kind of “make-sense” type measures, and that has made them more vulnerable because it gives them the illusion of having done their job. This is the problem with risk management. I always come back to a classical question: don’t give a fool the illusion of risk management. Don’t ask someone to guess the number of dentists in Manhattan after asking him the last four digits of his Social Security number. The numbers will always be correlated. I actually did some work on risk management, to show how stupid we are when it comes to risk.

Posted on October 3, 2008 at 7:48 AM
