Killing Robot Being Tested by Lockheed Martin

Wow:

The frightening but fascinatingly cool hovering robot, the MKV (Multiple Kill Vehicle), is designed to shoot down enemy ballistic missiles.

A video released by the Missile Defense Agency (MDA) shows the MKV being tested at the National Hover Test Facility at Edwards Air Force Base in California.

Inside a large steel cage, Lockheed's MKV lifts off the ground, moves left and right, rapidly firing as flames shoot out of its bottom and sides. This description doesn't do it justice, really; you have to see the video for yourself.

During the test, the MKV lifts off under its own propulsion and remains stationary, using its onboard retro-rockets. The potential of this drone is nothing short of science fiction.

Watching the video, you can't help but be reminded of the post-apocalyptic killing machines seen in such films as The Terminator and The Matrix.

Okay, people. Now is the time to start discussing the rules of war for autonomous robots. Now, when it’s still theoretical.

Posted on December 15, 2008 at 6:07 AM

Comments

igloo December 15, 2008 7:03 AM

Exterminate… Exterminate…
Who is going to do the programming for these beasties? The past records of most of the defense contractors do not give me much confidence if these Daleks are let loose at ground zero and operate autonomously! No wonder it was kept in a strong enclosure.
Exterminate… Exterminate…

archangel December 15, 2008 7:17 AM

OK. I think this is extraordinarily prone to misinterpretation, especially with the blog's billing it as antipersonnel robotics. This is not a land or air vehicle; it would run out of fuel in a ridiculously short period of time. Nor is this demonstration a display of weaponry. What this video shows is the really cool aspect of "making a vehicle that can balance on attitude jets." This is an extra-atmospheric vehicle, "designed to shoot down enemy ballistic missiles." It will never have to do what it does in the video, but the challenge of intelligent zero-g pursuit is about as complicated.

Who here remembers the 1980s? I want you to realize that this is Star Wars (Reagan, not Lucas). What you see "rapidly firing" is an array of attitude jets, not weaponry; the damn thing is keeping attitude in a gravity field. The fact that it's hovering on one jet rather than a down-facing array is precisely why the other jets are firing in such high-frequency pulses. Your muscles do the same.

The MKV is, as far as this information demonstrates, a platform for a bunch of these vehicles (the ones seen in the video), which are designed to seek and destroy outside of the atmosphere. This is not a “Terminator-like killer robot,” nor should it be billed as such. It would be a spectacular failure at antipersonnel field ops, as it runs out of fuel several meters beyond the hangar doors. “Kill” is slang for anything you knock out of commission, in this case chaff, decoys, and ballistic missiles. As in “If you can see it, you can hit it; if you can hit it, you can kill it.”
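
archangel's pulsing point is easy to demonstrate. Below is a toy simulation, with made-up constants rather than anything from the real MKV, of why on/off jets produce rapid bursts: they cannot throttle, so the controller fires briefly whenever the attitude drifts past a small deadband.

```python
# A toy sketch (illustrative constants only, nothing from the real MKV)
# of why attitude thrusters fire in rapid pulses: on/off jets can't
# throttle, so the controller fires short bursts whenever the attitude
# drifts past a small deadband.

def simulate_deadband_attitude(steps=5000, dt=0.001):
    angle, rate = 0.02, 0.0     # attitude error (rad) and rate (rad/s)
    inertia = 1.0               # kg*m^2, made up
    disturbance = 0.5           # N*m of steady disturbance torque
    jet = 4.0                   # N*m per jet; jets are on/off only
    deadband = 0.01             # rad of error we tolerate
    firing, pulses = False, 0
    for _ in range(steps):
        # Switching term adds a little rate "lead" so pulses damp motion.
        s = angle + 0.2 * rate
        fire_neg, fire_pos = s > deadband, s < -deadband
        if (fire_neg or fire_pos) and not firing:
            pulses += 1         # count each distinct burst
        firing = fire_neg or fire_pos
        torque = disturbance - jet * fire_neg + jet * fire_pos
        rate += (torque / inertia) * dt
        angle += rate * dt
    return angle, pulses

angle, pulses = simulate_deadband_attitude()
print(f"held to {angle:+.4f} rad using {pulses} separate jet pulses")
```

The jets end up firing a long series of short bursts while the error chatters at the edge of the deadband, which is the same high-frequency pulsing visible in the footage.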

bsr1826 December 15, 2008 7:51 AM

“Now is the time to start discussing the rules of war for autonomous robots. Now, when it’s still theoretical.”

Don’t deploy until it’s certain they will not violate the Geneva Conventions and other highly regarded bits of international law.

Mike B December 15, 2008 7:56 AM

Regarding autonomous robots: at some point there is some entity capable of independent thought that can also respond to an incentive structure (i.e., jail). It is the responsibility of the thing that doesn't want to go to jail to follow the "rules of war." This would generally be the human programmer in the case of non-thinking robots, or the robot itself once we develop strong AI.

As with current day terrorism we will want to avoid “Option B” which is a robot with a perverse incentive structure that engages in destructive behavior without fear of any repercussions.

The interesting scenario is "Option C," where the AI has a strong positive incentive structure but is not bound by the failings of human intelligence, and is able to take the optimal action that might be irrationally unpopular with humans. I for one would welcome our new robot overlords that would unilaterally replace security theatre with targeted assassinations 🙂

bsr1826 December 15, 2008 8:03 AM

"our new robot overlords that would unilaterally replace security theatre with targeted assassinations"

smart bombs?

Isaac Asimov December 15, 2008 8:07 AM

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Joe December 15, 2008 8:15 AM

Pfft, those three rules are totally broken. What if the robot has a second brain where those rules don't exist… hmm?! [/i-robot]

Thomas Frost December 15, 2008 8:20 AM

There’s no new technology in the video, really. I saw a similar test of SDI prototype hardware several years ago — hover, change position, track a fixed object. Lockheed Martin is very good at repurposing hardware (cf. the Phoenix project).

igloo December 15, 2008 8:21 AM

@Isaac Asimov

Asimov himself proposed a Zeroth law:

“A robot may not harm a human being, unless he finds a way to prove that in the final analysis, the harm done would benefit humanity in general.”

Unfortunately, the set of laws most likely to be implemented will be:

  1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
  2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
  3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

as proposed by David Langford ( http://en.wikipedia.org/wiki/David_Langford )

John Campbell December 15, 2008 8:35 AM

Time to have a debate between Keith Laumer and Fred Saberhagen…

Too bad Keith is already dead & buried.

“Roses are Red,
Bolos are Blue,
Berserkers Kill me…
and so, do you.”

Team America December 15, 2008 8:59 AM

If we humans actually manage to build robots that are smarter and stronger than humans, what’s the point in getting upset?

Let them breed us like chickens.

aikimark December 15, 2008 9:24 AM

“Must kill all humans” — Bender

====================
“6F6F7073” — first deployed robot killing machine after accidentally killing humans

“It was faulty intelligence.” — Pentagon spokesman explaining the ‘accident’.

“You go into space war with the robot killing machines you have, not the robot killing machines you would like to have.” — Zap Brannigan paraphrasing Donald Rumsfeld
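
(For anyone who doesn't sight-read hex: the robot's "quote" above is plain ASCII, as one line of Python confirms.)

```python
# Decode the hex "last words" from the comment above.
print(bytes.fromhex("6F6F7073").decode("ascii"))  # prints: oops
```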

Jeroen December 15, 2008 9:34 AM

There have been autonomous weapons for years. For example, ship-based Close-In Weapon Systems such as Phalanx and Goalkeeper can be switched to autonomous mode to automatically engage incoming anti-ship missiles.

But I agree that weapon autonomy can/will soon reach a whole new level, which will completely change the playing field. What about a fully autonomous sentry that makes its own friend/foe decisions and shoots to kill if it decides "foe"? Recognizing civilians is hard enough for humans. Robots are not going to be any better at it any time soon.

Pete December 15, 2008 9:40 AM

I watched the grainy YouTube copy of the video, and I was not convinced; the bot appeared to stay at precisely the same altitude the whole time, until it shut down. Given the pulsing of the jets, I would have expected some visible shifting.

Anonymous December 15, 2008 10:08 AM

What do you mean "theoretical"? The U.S. military already has weapons like the CBU-97 sensor-fuzed bomb. You drop this baby and it uses infrared and laser systems to independently find and hit up to 40 targets.

I wouldn’t go quite as far as calling it an autonomous robot, but it makes independent decisions about what it should kill, and it’s already been used in combat.

ax0n December 15, 2008 10:10 AM

“The frightening, but fascinatingly cool hovering robot – MKV (Multiple Kill Vehicle), is designed to shoot down enemy ballistic missiles.”

I think the article missed the point. This thing is supposed to detect, track, and intercept ballistic missiles; it's basically a moving target. From what I understand, a multiple-intercept weapon would carry many of these at once toward a MIRV-style multi-warhead ICBM. At that point, a flock of these MKVs would track and block the individual warheads, which would blow up upon hitting an MKV instead of making their way to their intended surface targets.

These don't actually have any onboard weapons. They just have precision rockets that keep them hovering or maneuvering quickly to intercept.

Ian December 15, 2008 10:11 AM

The ability to hover in a gravity field isn’t relevant to the type of system the narrator describes. This strikes me as some kind of wacky edit job. Why would they describe a space-based system with kinetic warheads and then show something hovering around in a cage? I’m not even sure this isn’t CGI.

peri December 15, 2008 10:21 AM

I fail to see the connection between this particular piece of missile-drone news and ethics for autonomous robots. Furthermore, I am pretty sure Israel and South Korea are field-testing autonomous killing machines, which isn't what I would call still theoretical.

Drew Thaler December 15, 2008 10:23 AM

Yeah, it’s just old SDI tech, and the linked post is excitable conspiracy crap.

As mentioned by archangel, the only thing rapidly firing is attitude rockets — not weapons. It’s not in a steel cage (implying containment of some ferocious animal?) — it’s in a cube of safety netting. The “multiple kills” referred to in the name are missile interceptions — not people.

Sure, it manages to hover on one rocket for about 30 seconds before exhausting its fuel. Whoo. That's a nice feat of attitude control, but not particularly scary or Terminator-like. It's also only useful for successful missile interception assuming you can get one of these things deployed into the approximate frame of reference of a missile, which is a really huge "if."

Paeniteo December 15, 2008 10:28 AM

I am not so sure that this is significantly different from "smart" missiles like Harpoon or AMRAAM. It is not as though it would remain in space and activate autonomously, or the like.

uberdilligaff December 15, 2008 10:35 AM

ARCHANGEL and WIREDOG have the ONLY insightful posts here. They correctly point out the actual context of this ancient video. The relevance of this sort of hover test is to demonstrate and test the precise attitude control that is essential for maneuvering high-velocity hit-to-kill missile interceptor vehicles, nothing remotely related to the "killing robot" of the inflammatory headline.

HJohn December 15, 2008 10:38 AM

I agree that discussions need to take place now, with the wisdom that we are 1) never going to be able to predict all scenarios and 2) never going to find perfect solutions.

More needs to be considered than just rules of engagement, however important those are. We also need to consider the points of accountability and at what point negligence becomes criminal. And these also need to be settled now, lest they become "gotcha" points later (i.e., an opportunity to draw the rules after the fact based on our pre-existing biases for or against those they could impact).

If a programming error causes inadvertent deaths, at what point does it become criminal? Similar to when brakes fail on a car: at what point is it just an accident, an unfortunate fact of life, and at what point do we prosecute someone for negligence? As we all know, mistakes are unavoidable. As we also all know, they are more likely when there is no consequence. Where to draw the line is no simple decision.

Will make for some interesting debates.

billswift December 15, 2008 10:38 AM

re Laumer and Saberhagen
Bolos and Berserkers are dramatic fiction, though I have to admit the Bolos have it all over the Berserkers for plausibility. Saberhagen is popular because he sacrificed plausibility to soap operatic drama.

A better story is David Brin's short "Lungfish," in his collection "Rivers of Time," where the galaxy is full of warring pro-life and anti-life machines and living beings are mostly in hiding (those that are surviving, anyway). Unfortunately, humans don't know this and have been broadcasting our existence and location for decades. The story is about a group of pro-life, but barely functional, machines hiding in the asteroids, and a scientist with a robot assistant who finds strong evidence of the war. This war situation would also explain why there are apparently no other intelligent radio sources out there.

Ride Fast December 15, 2008 11:42 AM

This vehicle is just the support platform for an anti-missile missile. It's intended to defend a soft-to-medium target against things like RPGs or light artillery.

It only needs to stay up for a few seconds to allow targeting and launch by the autonomous defense system. Some maneuverability is needed, plus a stable launch platform.

The system would need many such units to defend against multiple attacks.

http://ridenshoot.blogspot.com/2008/12/star-warsian-hovering-kill-droid.html

Andrew December 15, 2008 11:56 AM

@billswift

“Unfortunately, humans don’t know and have been broadcasting our existence and location for decades.”

So basically, bad television is really going to get us all eventually . . .

sidelobe December 15, 2008 12:17 PM

I’m less worried about the programmable killing robots than about the non-programmable killing robots already in use. Minefields (both land and sea) were deployed without regard to The Three Laws, or any other rules. Maybe lingering minefields represent the biggest argument for defining such rules of war today.

At least this old demo is fun to watch.

Andy Dingley December 15, 2008 12:56 PM

Some rocket science: late-phase course correction for fast vehicles outside the atmosphere has sod-all to do with rocket propulsion at sea level pressures and trying to hover “sideways” with zero forward speed. Impressive, but not the same game at all.

This is a T30 Killbot, optimised for hunting down fleshies. Probably deaf fleshies.

Possibly the scariest part of the video was the music. Could anyone listen to that and distinguish it from anime? What’s next? Robert Stevens in cosplay? “Main screen turn on”?

Kashmarek December 15, 2008 1:34 PM

If and when such devices are deployed, whether targeted to flying weapons or unarmed humans, if the intelligence behind the tool is the same that puts children on the no-fly list, we are all dead.

Timm Murray December 15, 2008 1:35 PM

The Asimov stories are themselves examples of why the Three Laws won't work. In nearly every story of the original "I, Robot" compilation, the laws combined in ways that appeared to contradict themselves, demonstrating that you can't make a robot "safe" using logical rules like this.

However, a more useful debate is whether they can be better than human soldiers under pressure. There's likely to be a headline sometime in the future like "Robotic Soldier Holds Innocent Man in Basement for 50 Hours with No Food or Water," and people will go ballistic. They might ignore the fact that human soldiers have committed many such atrocities, and that robots might do the same statistically less often.

HJohn December 15, 2008 2:00 PM

@Timm: "The Asimov stories are themselves examples of why the Three Laws won't work. In nearly every story of the original 'I, Robot' compilation, the laws combined in ways that appeared to contradict themselves, demonstrating that you can't make a robot 'safe' using logical rules like this."

I also remember the movie 2010, and how it explained why HAL killed the passengers in "2001: A Space Odyssey." Basically, he was given rules on how to finish the mission with the crew, and rules on how to finish the mission without the crew. When the crew aborted the mission, he could not finish it with them, so he switched to finishing it without them, and when they got in the way, he exterminated them.

I did a paper in a DSS class on artificial ethics in 1997. It dealt with the accountability and programming issues. Years later, I see some of the truth and some of the holes in it. The problem is, the real world can throw so many complexities into a situation that even the most intelligent and moral of humans have trouble making a good decision. Undoubtedly, there will be factors that robots will not be able to process, at least not until years beyond the initial development.

John Campbell December 15, 2008 2:01 PM

Barrayar starts receiving these transmissions and declares war on the source, not realizing that they’re declaring war on their ancestors…

(Barrayaran officer to building guard) “I told you to cover the exits! How did they escape?”

(Barrayaran building guard) “They left through an entrance, sir!”

Johns December 15, 2008 2:37 PM

Robotic logic should contain a hierarchy when it comes to preserving human life, just as humans do in war, law enforcement and the medical community.

Thus, the programmer will need to rank human life. Classifications could include combatants vs non-combatants. Gender. Race. Nationality. Profession.

Regarding the latter, how would you rank: a) suicide bomber, b) politician, c) attorney, d) technology geek?

Skorj December 15, 2008 3:10 PM

@ Johns “how would you rank a) suicide bomber, b) politician, c) attorney, d) technology geek”

As long as the robot kills a sufficient number of telemarketers, any other casualties are acceptable collateral damage.

Lou the troll December 15, 2008 3:17 PM

Regarding rules and AI: The argument exists that you aren’t intelligent if you can’t determine when to break the rules.

RH December 15, 2008 3:19 PM

This sort of control is a cakewalk these days. When college students are learning to solve the inverted pendulum problem in their sophomore year, hovering is rather trivial, especially with defense budgets.

What has me interested is swarms of unmanned vehicles communicating with each other. That sort of stuff has a lot of potential.
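
For anyone who never met the sophomore exercise RH mentions, here is a minimal classroom-style sketch: a PD controller balancing a pendulum about its upright position. The gains and constants are arbitrary illustrative values, not parameters of any real vehicle.

```python
# A classroom-style sketch of the inverted pendulum exercise: a PD
# controller balancing a pendulum linearized about upright. All gains
# and constants are arbitrary illustrative values.

import math

def simulate_inverted_pendulum(seconds=5.0, dt=0.001):
    g, length = 9.81, 1.0    # gravity (m/s^2), pendulum length (m)
    theta, omega = 0.1, 0.0  # initial tilt (rad) and angular rate (rad/s)
    kp, kd = 40.0, 10.0      # hand-tuned proportional and derivative gains
    for _ in range(int(seconds / dt)):
        # PD law: commanded angular acceleration opposes tilt and rate.
        u = -kp * theta - kd * omega
        # Plant: gravity tips the pendulum over, the controller pushes back.
        alpha = (g / length) * math.sin(theta) + u
        omega += alpha * dt
        theta += omega * dt
    return theta

print(f"tilt after 5 s: {simulate_inverted_pendulum():+.6f} rad")
```

With these gains the loop is well damped; the initial 0.1 rad tilt decays to essentially zero over the simulated five seconds. Hovering on a thrust vector applies the same feedback idea, just with harsher actuators.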

HJohn December 15, 2008 3:22 PM

@ Lou the troll: “The argument exists that you aren’t intelligent if you can’t determine when to break the rules.”

I think that’s a very fair point.

Jon December 15, 2008 4:04 PM

@ bsr1826:
“Don’t deploy until it’s certain they will not violate the Geneva Conventions and other highly regarded bits of international law.”

Assumes respect for the GCs and international law by those who should know better (see: Guantanamo Bay, Abu Ghraib, "illegal combatant," rendition, etc.).

In other words, it assumes that those programming the 'bots (or, rather, those instructing those programming the 'bots) are acting in good faith, honourably, and within the law(s).

That’s a rather large assumption, if you ask me.

biggles December 15, 2008 5:38 PM

We already have autonomous killing machines. They’re called mines, and despite attempts to the contrary, they’re still quite indiscriminate.

I’m not sure what makes anyone think that the powers that be would require anything different in new weaponry.

The command and control exists not at the kill point, but at the decision to deploy in a given place, and that’s unlikely to change barring some REAL advances in AI and cheap processing.

archangel December 15, 2008 5:48 PM

@RH: Excellent point. Math only gets easier the more generations we work the problem set. This is the difference between the MKV hardware (I keep thinking Mk. V when I type that) and the Smart Rocks/Brilliant Pebbles hardware. Find the old footage and watch how well it controls attitude, altitude, and position. Plenty of bounce and wiggle. This generation of hardware has fixed the old problems — its fine control is good enough to inspire comments that it’s CGI.

The networking feature is very nice. The question is redundancy of control nodes. The demo sells it on the point that each swarm has a control node, and I assume that’s set arbitrarily, not hardwired.
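
If the control node really is assigned arbitrarily, the redundancy question has a standard answer: have every member apply the same deterministic election rule. The sketch below is a generic lowest-id election, purely illustrative and certainly not the MKV's actual protocol.

```python
# A hedged sketch of the redundancy question: if the swarm's control
# node is assigned arbitrarily rather than hardwired, any survivor can
# take over. This is a generic lowest-id election, purely illustrative.

def elect_control_node(alive_ids):
    """Every member applies the same deterministic rule, so the whole
    swarm agrees on one control node with no extra coordination."""
    if not alive_ids:
        raise ValueError("swarm destroyed")
    return min(alive_ids)

swarm = {3, 7, 12, 19}
print(elect_control_node(swarm))  # 3 is the control node
swarm.discard(3)                  # the control node is lost...
print(elect_control_node(swarm))  # ...and 7 silently takes over
```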

Anonymous December 15, 2008 6:14 PM

I was there when the first test of this vehicle was conducted. It was around 1992, give or take a year or two. This is ANCIENT stuff. The video from the first test was memorable because you could hear the shouts of "GO BABY GO!!!" from one of the engineers who built it.

This video is a new test of the same technology. It looks just like the old video from years ago. I think the vehicle may be slightly bigger; it's hard to tell. But just from watching the video, it looks the same. I would be curious to know what the real advances have been since then.

I can’t believe this program has been going for all these years. Everything there always seemed so transitory (both programs and people) and on the verge of being cancelled at any time. I worked there 5 summers as an intern throughout high school and after my first year of university.

This was not classified or anything back then. They played this all over the place including on the base news channel. After I worked at Edwards I got involved in the dot com scene and almost completely forgot about this thing. What a blast from the past!

Chris Finch December 15, 2008 9:44 PM

Good to see this story featured.

What is everyone getting for Christmas?

I’m getting 100 new identities and credit cards to play with.

Have a good one everybody!

TB December 15, 2008 11:59 PM

wiredog, thanks so much for posting that video.

I was going to mention that this is just new footage of an old machine (with some updates).

I know this because I edited that video back in 1990 or '91, when I was working for a large defense contractor (not the prime contractor on this; we just got their footage and had to incorporate it into a pitch to the DoD).

At any rate, this is just more Star Wars-related crap, and it's only really threatening to our wallets, not us.

Wolfie December 16, 2008 6:09 AM

I think people, especially engineers, are very pleased with themselves over the little they've achieved: wow, it's a flying, shooting, ballistic decision-making robot, and it works! Yet eventually the only darn thing it will do best is make a good cappuccino. "Aah dude, that's sweet." "1 billion to do that? Yeah."

gypsydavey December 16, 2008 9:15 AM

The last line in this video is priceless. It says that in response to countermeasures and debris meant to confuse defensive weapons, this device will kill them too. Uh, isn’t that exactly what countermeasures are supposed to do? Use up the kill vehicles on junk?

Military contractors are the supreme practitioners of “tout the bug as a feature”.

Peter E Retep December 17, 2008 6:33 PM

The key question is: why do they need to be autonomous? (Asked by The other Alan on December 15, 2008, 12:55 PM.)

The key answer is: Deniability of responsibility through a plausibly abstracted interface.

As long as the liability for action is not remote, neither will the control be. As soon as it is, corrupt individuals who look for deniable options to exploit will have found yet another mechanism to exercise their will.

Peter E Retep December 17, 2008 6:49 PM

This brings to mind the current debate over pre-positioning the U.S. Army and other military units as trained to exercise emergency disaster-rescue functions, and whether this violates the Posse Comitatus Act.

The dodge is that it doesn't violate the act, until it does violate the act.

Assurances of military response limitations by those favoring it break down to a key question: when a military unit in an emergency response reports that it is, or feels it is, in [imminent danger of being] under hostile action by local residents, and the local unit C.O. orders his soldiers to go armed to protect their fellow unit's soldiers, will the soldier accept the order to arm and possibly fire on civilians, or will the soldier arrest his sergeant, lieutenant, or captain for giving an illegal order?

The uber-lesson seems to be: if you can complicate the boundary with enough complexity, then you can redefine crossing the boundary as "not crossing the boundary," so that corrupt individuals who look for deniable options to exploit will have found yet another mechanism to exercise their will, and can excuse themselves by saying they acted within the scope of their boundaries.

Bill McGonigle December 18, 2008 5:08 PM

I’m not so sure that all this technology in warfare is a good thing for those in the computer industry. When do defense programmers start getting Secret Service details?

bob December 19, 2008 1:23 PM

Exactly as above. Even if you have the three laws, what is to stop someone from redefining what "human" means? What if "insurgents" are no longer deemed to fall into the category of "human"? And so on and so forth for every term used. Also, these laws, worded as they are, presume a level of machine intelligence that is vastly beyond anything possible today.

John Waters December 22, 2008 2:18 AM

As a bonus, those not killed by the MKV can easily be tracked by the trails of urine and tears they leave in their wake as they flee in terror…

MysticKnightoftheSea March 3, 2009 6:00 AM

Brings to mind Dean Ing’s “Butcher Bird”, scarily plausible fiction.

MKotS
