Ethics of Autonomous Military Robots

Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Technical Report GIT-GVU-07-11. Fascinating (and long: 117-page) paper on the ethical implications of robots in war.

Summary, Conclusions, and Future Work

This report has provided the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system architecture capable of the ethical use of lethal force. These first steps toward that goal are very preliminary and subject to major revision, but at the very least they can be viewed as the beginnings of an ethical robotic warfighter. The primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity.

It is too early to tell whether this venture will be successful. There are daunting problems
remaining:

  • The transformation of International Protocols and battlefield ethics into machine usable representations and real-time reasoning capabilities for bounded morality using modal logics.
  • Mechanisms to ensure that the design of intelligent behaviors only provide responses within rigorously defined ethical boundaries.
  • The creation of techniques to permit the adaptation of an ethical constraint set and underlying behavioral control parameters that will ensure moral performance, should those norms be violated in any way, involving reflective and affective processing.
  • A means to make responsibility assignment clear and explicit for all concerned parties regarding the deployment of a machine with a lethal potential on its mission.

Over the next two years, this architecture will be slowly fleshed out in the context of the specific test scenarios outlined in this article. Hopefully the goals of this effort will fuel other scientists’ interest to assist in ensuring that the machines that we as roboticists create fit within international and societal expectations and requirements.

My personal hope would be that they will never be needed in the present or the future. But mankind’s tendency toward war seems overwhelming and inevitable. At the very least, if we can reduce civilian casualties according to what the Geneva Conventions have promoted and the Just War tradition subscribes to, the result will have been a humanitarian effort, even while staring directly at the face of war.
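The report’s design criteria quoted above amount to a deny-by-default filter: a proposed lethal behavior is released only if every constraint derived from the Laws of War and the rules of engagement is satisfied. The sketch below is a rough, hypothetical illustration of that “bounded morality” idea, not the architecture the report specifies; the class names, the three example constraints, and the 0.1 threshold are all invented for the illustration.

```python
# Minimal, hypothetical sketch of a deny-by-default "ethical constraint"
# filter on a candidate lethal action. The names, the three example
# constraints, and the 0.1 threshold are invented for illustration;
# this is not the architecture the report describes.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Situation:
    target_is_combatant: bool    # discrimination
    expected_collateral: float   # proportionality proxy, 0.0 .. 1.0
    engagement_authorized: bool  # within mission / rules of engagement

Constraint = Callable[[Situation], bool]

CONSTRAINTS: List[Constraint] = [
    lambda s: s.target_is_combatant,        # never target noncombatants
    lambda s: s.expected_collateral < 0.1,  # illustrative proportionality bound
    lambda s: s.engagement_authorized,      # only within authorized engagements
]

def permit_lethal_response(situation: Situation) -> bool:
    """Withhold lethal force unless *every* constraint is satisfied."""
    return all(check(situation) for check in CONSTRAINTS)
```

The only point of the sketch is the structure: any constraint that cannot be positively established forces the system to withhold force, which is the sense in which the report hopes such robots could “outperform human soldiers in their ethical capacity.”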

Posted on January 28, 2008 at 7:12 AM • 76 Comments

Comments

aikimark January 28, 2008 8:00 AM

Bender: “You know what always cheers me up? Laughing at other people’s misfortune. Hahaha!”

=======
Bender: [while sleeping] “Kill all humans, kill all humans, must kill all hu…”

[while sleeping] “Hey, sexy mama… Wanna kill all humans?”

=======
Maybe if we programmed robots to kill lawyers, we’d establish some time buffer for the preservation of the rest of humanity. Can lawyers truly be considered fully human?!? It’s such a grey area. 😉

Nicholas Weaver January 28, 2008 8:39 AM

One thought: the work is interesting, but the spin might keep it from getting traction.

When you build your killbot [1], you want these levels of control, because this is effectively “good target discrimination” and “minimal collateral damage” if you just do a search-and-replace of “ethics” with “rules of engagement”.

Because even if you want to do the “Kill all humans” routines, you don’t want to waste ammo shooting at Number 6 or a Sharon.

[1] “We can always make more killbots” -Bender

Andrew Gumbrell January 28, 2008 8:52 AM

I have long believed that Asimov’s Laws of Robotics are the only safe rules to apply.
If we teach ‘thinking’ machines to kill humans, how much time will we have left before they perceive that all humans are a problem to be eliminated?

Captain Obvious January 28, 2008 8:53 AM

I think we’re way overdue for a similar set of rules on bombs. In time of war, bombs should somehow magically just know who’s a good guy and who’s a bad guy. No country should ever even consider using bombs unless there’s no chance whatsoever of them injuring anyone except enemy soldiers, members of the enemy command and control hierarchy, and possibly those who voted for the party in power.

Let me know if I’m off base here…

kurt wismer January 28, 2008 8:58 AM

y’know, pop-culture is full of depictions of what can go wrong when you build autonomous killing machines (from terminator to screamers to blade runner to that episode of st:tng with the holographic weapons salesman on a dead world)…

considering our persistent ability to realize the unintended consequences of our actions in other spheres, why do people keep trying to build these things?

it seems to me that even ethical autonomous killing machines are a bad idea…

Jerry Cornielious January 28, 2008 9:07 AM

/me looks around for an inner-tube, or a flat of cardboard….

This is a slippery slope if I ever saw one – disconnecting the human from the trigger is a very bad idea.

.02

Kees January 28, 2008 9:11 AM

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Kashmarek January 28, 2008 9:19 AM

Now, if we could only get those who control the robots to follow the same ethical standards. Oh, maybe they already follow the standards they want the robots to use.

bear January 28, 2008 9:39 AM

What is the point of having a war if it is going to end up technology against technology? Not that I condone killing people, but how can there be a winner unless one side concedes?

Great idea if only one side has it and is trying to save their own lives, but if one has it, the others will shortly.

Wasn’t there an older Star Trek episode where two worlds fought their wars on computers and, when your area was deemed hit, you went to a machine to terminate your existence?

FP January 28, 2008 9:41 AM

Ethical autonomous military robots will predict the probability of victory in the face of an opposing force (of autonomous military robots?) and surrender without a single shot fired.

Pawn to e4 — checkmate in 43 moves!

jack c lipton January 28, 2008 9:47 AM

All right, so are we talking about Bolos? Or will Bolos carry smaller infantry-ish support robots?

(laughs)

Roses are Red,
Bolos are Blue…

I love Movies January 28, 2008 9:54 AM

What kills me is that all these scientists and others working on robots seem to have forgotten what happens when AI is given weapons to control.

See also: Terminator series, The Matrix series.

Bigfoot January 28, 2008 9:57 AM

I don’t see how a robot differs much from a land mine in the ethics department. Both are autonomous devices intended to kill people; perhaps the robot is a bit better at discerning between friend and foe, but that doesn’t lessen the responsibility of the people who fielded it.

Captain Obvious January 28, 2008 10:15 AM

C’mon people… movies are fun and all, but if regurgitating sci-fi movie plots is the best argument you can muster…

Grey Bird January 28, 2008 10:28 AM

Let’s make this simple… The “ethical” robots are only going to be as ethical as those in charge of those doing the programming. Take a look at what “those in charge” are currently doing to the constitution. Do you really want to trust the ethics of those people in a robot? I think not.

derf January 28, 2008 10:38 AM

“An OCP product may not act against a senior official of OCP.”

You’re fired.

BANG

There are definitely security implications here. What happens when the coder makes a mistake in a key part of the program or hardware corruption changes the memory of this beast? Worse still – imagine the consequences of the enemy breaking into your robot army. You think a virus or trojan on your home PC is bad, imagine a Microsoft Soldier (Bob on steroids?) robot on the battlefield.

MikeA January 28, 2008 10:50 AM

“Talk to the bomb. Teach it epistemology”.
(The dead captain)
Seriously, I agree (from reading only the abstract above) that these “ethics” are more like “rules of engagement”. Combine the propensity of flesh-and-blood troops to occasionally “bend” those rules with the litany of Electronic Voting Machine “errors” (and that is as charitably as I can put it), and you indeed approach the level of movie-plot Armageddon.

(OTOH, there are folks as we speak working to bring about Armageddon, so…)

Todd Knarr January 28, 2008 11:11 AM

Kees: go read “I, Robot” by Asimov. Not the movie, or the book adaptation of the movie, but the original collection of short stories. Asimov, after creating his 3 Laws, proceeded to write a good dozen stories about situations where the 3 Laws didn’t work as expected.

Personally I think any “ethical constraint” rules hard-wired into robots will only be a good idea as long as the robots aren’t capable of independent reasoning. Once they reach that stage, well… how do you react to someone forcing you to do things?

Anonymous January 28, 2008 11:17 AM

@bear
You need to read more science fiction — try some of Asimov’s short stories on the subject, and then move on to the Star Trek and Star Trek: TNG series for similar episodes…

Chris S January 28, 2008 11:20 AM

MikeA:

Isn’t that “Talk to the bomb. Teach it Phenomenology”?

I think that no matter what we call it – even rules of engagement – we’ll find that actually setting the rules so that WE agree with them will be the hardest part.

We can’t even do that now in order to perfectly tell other people how to behave. There are always unforeseen loopholes.

Chris January 28, 2008 11:22 AM

So if we’re to create a machine that can make ethical decisions with regard to taking a human life, will that machine be able to refuse an order to kill? Would any nation entrust the lives of its citizens to something that can disobey orders?

Morality and ethics vary widely amongst human populations; we can’t always decide for ourselves what is moral or ethical, yet we’re going to create machines that can? And we’re going to assume that we can test the machine’s morality thoroughly enough to trust it with the use of lethal force in less-than-benign environments?

I believe systems such as the one proposed are going to be a very long way off.

Sean January 28, 2008 11:30 AM

Didn’t we already try this with mustard gas? The friendly fire will be pretty awesome when it happens. The friendly robot detects soldiers and eliminates them. Can’t think of anything fairer than that.

aikimark January 28, 2008 11:32 AM

I miss the old BattleBots show on Comedy Central. No humans were hurt in the production of that show. Why can’t we just fight by proxy? Best engineering wins.

Is it beyond hope that battle mechs might be controlled by semi-benevolent AI, like Tweedledee and Tweedledum?

paul January 28, 2008 12:02 PM

If you tell a robot tank to shoot a group of prisoners, does that absolve you of responsibility for a war crime?

From the short version, it sounds as if this may be a much more sophisticated version of the limits coded into CIWS and other self-targeted weapons to stop them from shooting into or through the installation they’re supposed to defend.

Michael Richardson January 28, 2008 12:07 PM

@possibly those who voted for the party in power.

I’m trying to figure out how that would work in Florida. Do they have to know that they voted for the party in power? (i.e. does it need to know their intent?)
And if they intended to vote for the party in power, but failed due to hanging chad, are they spared?

peri January 28, 2008 12:41 PM

For those who wouldn’t recognize Ronald Arkin’s name, he literally wrote the book on modern robotics; see “Behavior-Based Robotics.”

The article does not mention Terminator style movies but he does directly address Asimov’s laws:

I suppose a discussion of the ethical behavior of robots would be incomplete without some reference to [Asimov 50]’s “Three Laws of Robotics” (there are actually four [Asimov 85]). Needless to say, I am not alone in my belief that, while they are elegant in their simplicity and have served a useful fictional purpose by bringing to light a whole range of issues surrounding robot ethics and rights, they are at best a strawman to bootstrap the ethical debate and as such serve no useful practical purpose beyond their fictional roots. [AndersonS 07], from a philosophical perspective, similarly rejects them, arguing: “Asimov’s ‘Three Laws of Robotics’ are an unsatisfactory basis for Machine Ethics, regardless of the status of the machine”. With all due respect, I must concur.

Dom De Vitto January 28, 2008 12:42 PM

The long-standing Sci-Fi prophecy of intelligent machines rising up to enslave and destroy the human race has disappeared from modern culture.

As far as I can tell, this coincided with the release of MS-DOS.

pavel January 28, 2008 1:07 PM

When my company made plans to work in the defense and security sector, I was tasked to come up with a set of rules defining how far we would go there.

For me it is generally immoral to create or build a machine that automatically kills people. The mentioned land mines also fall into this category, as well as systems like the Oerlikon GDF-005 robot cannon that killed nine soldiers in South Africa in October 2007.

When I build a gun and someone pulls the trigger, he has to cope with it.

When I create a rule based system that pulls the trigger, it is actually me who makes the final decision who gets killed.

Even if I could create a perfect and flawless system, it still might kill anyone under certain circumstances, and that includes myself or my kids.

However, in reality every such system will be heavily flawed, and it will kill innocent people sooner or later, even if it is a simple system like a land mine.

Using a complex AI-based system today or even in the foreseeable future for the purpose of making kill decisions is stupid and dangerous bullshit, given that we did not make much real progress in the last 60 years of AI.

I think any machine with the purpose of harming people that does not require a conscious and present human decision to do so should be outlawed everywhere and forever, no exceptions.

Autonomous killing machines are not only a very bad idea, they are highly condemnable.

No amount of built-in ethics can change that.

am January 28, 2008 1:20 PM

If we are concerned about a future desensitized to war, we’re already there. I feel so far removed from the daily horrors of a war my country is currently fighting that it is almost like it is already being fought with robots rather than humans.

havvok January 28, 2008 1:43 PM

“The creation of techniques to permit the adaptation of an ethical constraint set and underlying behavioral control parameters that will ensure moral performance, should those norms be violated in any way, involving reflective and affective processing.”

That is far and away the scariest thing I have ever read in the context of weapons development.

The ability to detect and select new targets effectively on the battlefield would be very cool; the ability to analyze and infer new rules about target selection, based on machine-interpreted rules about morality, not so much.

Charles Choi January 28, 2008 2:10 PM

Kyle has nailed this, especially when looked at in the light of asymmetric warfare; using robots to do the dirty work of war will only enhance the morale of those being shot at. Think of it from the other side: “they’re so cowardly that they have to send machines to kill us.”

Roy January 28, 2008 2:31 PM

The slippery slope began with self-propelled torpedoes and naval mines. Land mines followed soon enough.

Now we will see autonomous weapons actively discriminating among friendlies, hostiles, and neutrals. In the ‘weapons tight’ condition the rule is to kill only hostiles; in ‘weapons free’, it will kill anyone not friendly.

What exactly does a hostile look like? A friendly? A neutral? What algorithm will tell them apart?

If we equip friendlies with transponders, then most of the time the automaton won’t kill them, but it will kill all the non-transponders it can — hostiles, neutrals, and unlucky friendlies. Building better transponders might seem like a good idea, but the money is made mass-producing them, so once mass-production begins, mass-counterfeiting begins.
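For concreteness, here is a minimal sketch of the “weapons tight” / “weapons free” rule and the transponder problem described above. The state names, the identity categories, and the transponder check are illustrative assumptions only, not any real system’s logic.

```python
# Hypothetical illustration of the engagement rule described above.
# Under "weapons tight" only confirmed hostiles may be engaged; under
# "weapons free", anyone not positively identified as friendly may be.
from enum import Enum, auto

class WeaponsState(Enum):
    TIGHT = auto()  # engage only confirmed hostiles
    FREE = auto()   # engage anyone not confirmed friendly

class Identity(Enum):
    FRIENDLY = auto()
    HOSTILE = auto()
    NEUTRAL = auto()
    UNKNOWN = auto()

def classify(has_valid_transponder: bool, observed_hostile_act: bool) -> Identity:
    # Crude identification: transponder means friendly, a hostile act means
    # hostile, otherwise unknown. Counterfeit transponders break the first branch.
    if has_valid_transponder:
        return Identity.FRIENDLY
    if observed_hostile_act:
        return Identity.HOSTILE
    return Identity.UNKNOWN

def may_engage(identity: Identity, state: WeaponsState) -> bool:
    if state is WeaponsState.TIGHT:
        return identity is Identity.HOSTILE
    # Weapons free: anything not positively identified as friendly.
    return identity is not Identity.FRIENDLY
```

Under weapons free, anyone the classifier cannot positively mark as friendly becomes a permissible target, which is exactly how missing or counterfeit transponders turn into dead neutrals and unlucky friendlies.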

Killbots would be safer than indiscriminate weapons like a hydrogen bomb, but why would we want a weapon that will kill our own kind, and will kill neutrals indiscriminately? I thought the whole point of war was to make fortunes building weaponry, and then make bigger fortunes replacing the weaponry destroyed or outmoded.

fusion January 28, 2008 2:53 PM

Then there is the fundamental problem sketched by Korzybski: semantics. Excuse my fumbling to bring this to the surface. Arkin may be wise, but words are abstractions linked to real-world processes by intellectual structures – which humans construct and use differently.

Whose “ethical” are we using?

Jack C Lipton January 28, 2008 3:26 PM

The complaint about dredging up Sci-Fi scenarios is that Sci-Fi and other speculative fiction tends to be better at illuminating dystopias rather than utopias… because dystopias tend to be what we NEED to worry about.

Beyond that, looking at what passes for the political leadership we have: who sets the ethical rules? Who reviews them?

Personally, I want a living thing somewhere in the decision loop.

woot January 28, 2008 4:18 PM

I look forward to these rules as an addition to the Geneva Convention rules (or a new version of them). Understanding how and when robots should be used in warfare is something we should do.

I don’t think it would be too hard for one disciplined in the arts to take an RC car, add an imaging sensor (for face/place recognition), and program it to carry a “package”. I would prefer there be rules about how close the next non-target person is for a device like that. (Of course – I would prefer this thing to not exist at all…)

Granted – not all people would agree or use these rules. But the more countries that do, the safer each of us is going to a baseball game.

Peter E Retep January 28, 2008 5:04 PM

As a think piece, it’s barely O.K. As a command rubric, the bias in favor of presuming total obedience of a system is a serious – perhaps literally fatal – flaw.

NotME January 28, 2008 6:19 PM

We don’t need fancy ST plots with simulated war, or even fanciful EMP devices like The Matrix.

All combatants just need to wear CAPTCHAs to confound the machines for days.

@ all those who called this reprehensible
Amen! War is bloody and -usually- profitable. Without the blood, it’s just profitable. War without blood = murder for hire.

ZaD MoFo January 28, 2008 6:25 PM

The robots will become an extension of our actions, both in our factories and in our armed gestures. Is it desirable to choose a model of human cognition (with its invariable weaknesses, and the ability to use social engineering to neutralize them) as the model from which to draw up their behaviour? The era of Terminators seems close if one considers that modern homes will soon contain several robots, much as they came to contain the VCR less than twenty years ago.

Bond January 28, 2008 6:30 PM

We’re trying for the wrong standard. Trying to make ‘ethical autonomous weapons’ is not the issue; trying to make ‘more-ethical-than-humans autonomous weapons’ is undeniably an excellent idea. While it may well be very difficult to make truly ethical robots, the standard of making robots that behave themselves better than humans is pretty easy, really.
It should be noted that armies go to considerable efforts to prevent their soldiers making decisions, hence ‘rules of engagement’ rather than an instruction to ‘be ethical’.
We worry about robots turning against us and destroying us, but we should remember that humans have been turning against each other and destroying each other for a long time, and it’s still easier to make more people than it is to make more robots. When robots have a mechanism in place to evolve, then we’re in trouble.

paul January 28, 2008 7:12 PM

This kind of work also reminds me a bit of the work Mark Stefik at PARC did in the 90s on rights definition languages for digital content — machine-readable, formal languages that would allow content publishers and content purchasers to negotiate how much the purchasers would pay for how many copies made under what conditions and so forth. It was very elegant, but since the only position most publishers wanted was “We keep everything. Take it or leave it.”, the idea never reached terribly widespread distribution.

Jim Lux January 28, 2008 8:37 PM

Didn’t Norbert Wiener (of “Cybernetics” fame) discuss this (the ethics/morals of automated systems that kill people) in some of his works 50 or more years ago? I can’t think of the titles off hand. “God and Golem” or “The human use of human beings” might be the ones.

It comes down to a very complex issue of where you draw the lines. How is sending targeting information to a B-52 (with a human pilot) flying above the clouds different from sending the same targeting info to an ICBM?

SolarSauna January 29, 2008 2:07 AM

It is not clear if this type of ethical control and reasoning system for robots could prevent incidents like: a) the genocide in Rwanda, b) ethnic cleansing in Kosovo, c) forcing children to fight in armies as in Liberia and Congo, d) cutting off a hand of the enemy to overload their medical facilities as in Liberia. Once these robotic war fighters are developed they would become prime objects for sale by unethical(?) arms dealers. What’s to prevent their algorithms from being altered to gain an advantage in combat? Will the software designers who made a mistake or an unauthorized alteration be charged with a war crime?

averros January 29, 2008 3:09 AM

Oh, how cute. The people manifestly lacking any ethics (in that they think nothing of wasting money expropriated from people on advising how to do automated mass-murder “ethically”) are writing about robot ethics.

There’s nothing complex about the issue of killer robots. They’re simply elaborate traps – and all “ethics” of such traps comes from the intent of their use by their owners.

Malcolm January 29, 2008 5:14 AM

Bruce, you missed another “daunting problem”: doubt.

Even if we nail the problems you list, we still end up with a bunch of killing machines that are absolutely certain that they’re doing the right thing. History shows that that’s a recipe for genocide.

aikimark January 29, 2008 7:45 AM

@Jack C Lipton

“Personally, I want a living thing somewhere in the decision loop.”

What if that living thing were:
* Dick (shoot-for-the-face) Cheney?
* Charles Manson?
* Ted Kaczynski?
* Karl Rove? (generic neocon example)
* George Bush? (generic Forrest Gump example)

badfrog January 29, 2008 12:49 PM

These implications were all worked out in science fiction in the fifties and sixties. The first (I think) was “Cordwainer Smith’s” manshonyaggers (menschenjaggers, or manhunters), which were still patrolling their area after 50,000 years. Probably the most thorough were the Berserker stories of Fred Saberhagen. I remember the BOLOs, too. Norman Spinrad wrote the Star Trek story “The Doomsday Machine,” which introduced the idea to a wider audience without any real ethical implications. There must be hundreds of stories lying about in moldering stacks of old Galaxy, Analog, and Amazing magazines.

It takes twenty years to raise a soldier, and months to train him well.

If you can manufacture and program them by the millions, it’s much cheaper in terms of time, emotion, and supply. Attrition is now an attractive option. Think of fighting the Iranian or Chinese “human wave” strategy with a “robot wave.”

However, the U.S. combined arms strategy is far more effective in terms of massed battle. Bombs are much cheaper than robots.

But think of sending a robot with advanced AI and a machine gun into a crack house in Detroit or a safe house in Baghdad. The thinking is to save some good guy lives.

But it will be just as safe and still more effective to send in a robot controlled by a human hand and mind, such as the flying drones currently employed over Iraq, Afghanistan, and Israel.

For the foreseeable future, robot warfare will probably be defensive, i.e., a machine gun that fires on anything that crosses an infrared beam. Kind of like an automated grocery store entrance, only noisier.

Jack C Lipton January 29, 2008 2:11 PM

@aikimark

Well, think of the two-man rule for launching nuclear weapons as part of the fail-safe, but, yeah…

“I built the M-5 with my engrams” – Dick Cheney.

If they built the M-5 with Dubya’s engrams it wouldn’t be able to add, much less do any real damage.

DaveK January 29, 2008 3:10 PM

The whole thing seems like a massive exercise in question-begging to me.

“If only we knew how to make an autonomous battlefield robot, we could make an autonomous ethical battlefield robot, if only we knew how to make it ethical”.

Diodotus January 29, 2008 11:58 PM

A great deal of faith is being placed here on the idea that the generals, civilian policymakers, and their minions in the R&D industries want the troops to behave well, and it’s just the bad apples who muck things up. A lot of history suggests otherwise. Maybe we need robot robot-programmers, as well…

kmax January 30, 2008 10:00 AM

In contrast to most people’s comments, I think such an ethical robot could be a very good thing, even though I consider myself a pacifist.

As history has shown, war is inevitable. Most unfortunate. As history has shown, soldiers commit crimes and kill innocent people. Most unfortunate, too. As history has shown, software has bugs, so the robots will too, and will kill innocent people. Most unfortunate, once more.

The question is: who is more likely to follow ethical rules and the Geneva Convention in extreme situations? Who will end up with the smaller number of killed innocents, a group of n soldiers or n robots?

drachenchen June 26, 2008 6:59 AM

Looks like I missed the discussion, along with most of the American people. This is typical of most weapons programs. The very idea that, with the cult of secrecy AND the cult of brutality in place, anyone connected with the military is qualified to develop an “ethical weapons system” of any kind is beyond laughable. It is perhaps laughably obscene, in the vilest sense. This nation has just finished invading a foreign nation on false pretenses for the proven purpose of stealing their natural resources. This nation’s military has killed over HALF a MILLION “non-friendlies”, and maimed many more, including children, while officially “keeping civilian casualties to an absolute minimum” with “tightly targeted munitions”.

Calling it a “smart bomb” obviously doesn’t make it smart. When retarded whackos are calling the targets, that makes it a “retarded whacko bomb”. When fascio-Christian nut-bars are telling it where to land, it becomes a “fascio-Christian nut-bar bomb”. Insert either of those two modifiers in place of “ethical” in your discussions of autonomous battlefield robots, and you’ll get a far more accurate picture of how they will behave. “Yes, we’re building an autonomous crypto-capitalist, greed-oriented battlefield robot. It will only kill people standing in the way of taking OUR oil, or OUR food away from them.”

-And now that the Bushies have designated North America as an operational theater for the very first time, think about how it will be to have one of these killing machines on every street corner…

I also find it hilarious how there is NO discussion about how you will manage to prevent damage to the robot’s so-called “ethical” circuits. You know, damage? -Like what happens during actual battle?

Oh, and one last thing, how will you folks who are even THINKING about developing these things find out the meaning of the word “ethics”?

bob December 15, 2008 10:21 AM

Considering how many times in a single day I am thwarted from a reasonable goal by the invalid assumptions of the person who programmed the logic of the device in question (Chevrolet windshield wipers & vent controls, Windows Vista, elevators, just to name a few), I am scared by the idea of an ‘autonomous’ device capable of killing people (I mean as its end design goal, not by accident – machines have probably been killing people by accident since right after the wheel was invented).

The other Alan December 15, 2008 12:55 PM

My question is: why do they need to be autonomous? We have unmanned (yet human-controlled) armed drone aircraft; why the need to make these autonomous? Anyone?

I mean, I can understand the desire to want to do so, but the downside is just too huge to ignore.

ChrisTheEngineer December 17, 2008 7:36 PM

There is no possible way that, for the foreseeable future, “ethics” can be coded. None.

Autobot April 7, 2009 3:28 PM

It’s simple. Whatever we ever design or develop will have a mirror reflection in a “dark side”. WHY do we have to make prophets of all the classical sci-fi writers: Asimov, Stanislaw Lem, Arthur C. Clarke, Philip K. Dick? They wrote about that stuff decades before. Do you still consider Blade Runner or Terminator sci-fi? If you do, then you are a naive person. How about “War Games” or the awesome “I, Robot”? Can we ever learn that every great discovery brings destruction, and THAT is what needs to be controlled? A machine can never make a correct decision, because most of our life is built on what most robots (morons) would consider not logical at all. Not all logical moves result in logical outcomes. I love the comment with the chess…

Sutobot April 7, 2009 3:31 PM

Didn’t we have autonomous robotic systems in nuclear silos, and didn’t one almost cause a nuclear war in real life??? What year was it???

Anonymous April 7, 2009 4:03 PM

@Sutobot:
No. You might be thinking of the fictional movie, “WarGames”. Or you might be thinking of one of the several incidents where alert systems caused preparations for launch to begin. However, real nuclear silo control procedures are not at all autonomous and have several layers of human checks and balances. Contrary to some sensationalistic accounts, none of those incidents actually got beyond initial preparations, because they incorporated backup checks which failed to validate the initial alarm.

Sutobot April 8, 2009 12:42 PM

@Anonymous – oh, so you know that War Games was not based on a fictional scenario. Good!!! It DOES NOT matter on what level the robot screws up, because if there were no safety checks to validate the alarm, the missiles would go up. Simple, isn’t it? We are talking about autonomous robots here. Look up “autonomous” if you don’t know the word’s meaning. NO SAFETY, NO VALIDATION – SELF DECISION. And a safety block? Give me a break. The first wacko is going to disable it, and I bet a few will die in labs.

UNIT 01 April 11, 2009 11:31 AM

While I agree that reckless development and deployment of autonomous armed devices might pose a threat, I still think that, if properly implemented (yup, I know that is one hell of a caveat), these “fully automated armed military systems” will lead to more precise application of military force and less civilian bloodshed and suffering.

A war waged through properly implemented autonomous “war-bots” will be more humane towards noncombatants.

Why?

Because:
1) A fully automated armed military system is not bound to have self-preservation as strong as a human combatant.
While human soldiers are likely to use lethal force in response to a slight hint of unfriendly behavior (even if ordered not to do so) due to inherent human self-preservation instincts and combat stress, a “war-bot” which lacks such human instincts will strictly follow the “innocent until proven guilty” maxim if ordered to do so, thus preferring to err on the side of misdetecting a combatant over accidentally killing the proverbial “girl who tries to offer war-bot an ice-cream”.

A properly implemented fully automated armed military system will not suffer “morale loss” over the fact that following such orders increases the threat to its existence.

Also, a war-bot following such “civ-friendly” orders will not incur the horrible costs in regards to morale at home, as there are no friends or relatives to mourn a war-bot.

2) War-bots are EXTREMELY unlikely to develop truly sadistic inclinations (unless programmed to do so)

3) Properly implemented war-bots will not develop “war-time nationalism”, thus will not be hostile to all people belonging to the ethnicity common among enemy combatants

4) A properly implemented fully automated armed military system will not demonstrate “personality traits” (not a term that suits a robot well, but I hope you forgive me using it to describe certain behavioral patterns common to humans) that are likely to instill hatred and hostility in civilian populations in the warzone and at home (robots will not throw puppies)

5) A properly implemented fully automated armed military system will not demonstrate negative “personality traits” usually arising due to war-time stress and the overall criminogenic environment of war (robots will not rape women “because they can”)… Unless programmed to do so 😉

6) A properly implemented fully automated armed military system will demonstrate “personality traits” that will instill trust, friendliness and generic desire to support the “war-bots” in allies and non-combatants.

7) Once a “skill” (like “detect suspect suicide bombers from gait, heartbeat and other behavioral and remotely-detected physiological properties”) is implemented in AI, replicating it to other machines is cheap and easy. Replicating skills in humans is expensive, hard, and not very reliable.

Of course, “civ-friendly” war-bots will suffer additional losses due to guerrillas taking advantage of their “civ-friendliness”, but the pace of technological progress will likely make the costs of each individual autonomous war machine comparable to that of training and equipping a human. Also, the sheer superiority an advanced war-bot will have in terms of armor, firepower and sensor capabilities will make it harder for a human enemy to take advantage of.

It is also possible that AI will have additional analytical capability allowing them to deduce hostile intent from evidence that would not be sufficient for a human mind to reach the same conclusions (it does not mean the AI will be “smarter than human”, it will merely be “better at spotting IEDs”)

All-in-all, I think it is possible that, thanks to AI research and advanced robotics, we will wage safer, more human-friendly wars in the future.

Clive Robinson April 11, 2009 4:36 PM

@ UNIT 01,

“All-in-all, I think it is possible that, thanks to AI research and advanced robotics, we will wage safer, more human-friendly wars in the future.”

I don’t think you actually understand what a war is supposed to achieve or how…

But such a “human friendly smart bot” will not win any wars that we would understand today. Further, it would fail against the simple and obvious “Gandhi attack”.

That is, it would stand on a street corner, impotent against a civilian population who just completely ignore it or sit down around it passively.

That is, provided they don’t aggress it, it will have no choice but to just sit there doing nothing until it either fails, malfunctions, or is redeployed in a different manner.

One of the problems that people often fail to realise when talking about “humane warfare” is that a war is about applying physical means (force) for political objectives.

That is, a civilian populace or their leaders have to be cowed into a subordinate position and accept the diktat of their opposition.

This requires that they understand that failure to acquiesce has clear, immediate and easily understandable consequences.

Unfortunately, you also have to realise that a populace that has nothing to lose or does not fear death cannot be easily cowed or defeated, and the conflict will rapidly descend into a war of attrition.

Worse, if the people or their leaders do not have the same level of humanitarian feelings as their opponents, then the opponents are fighting from what is effectively a losing position.

A robot restricted by humanitarian ideals, stuck on a street corner surrounded by passive people, is impotent.

It has to be able to carry out effective sanctions or it will fail. Therefore, unless programmed to inflict sanctions that affect humans in a clear and unambiguous way, it cannot be used to prosecute a successful war.

Which effectively means that civilians will get hurt or the robots will be destroyed by them…

Failure to understand the basic realities of war is one of the reasons we are currently in the mess we are in.

Look at the Government of Israel, for instance: they have for many years tried all manner of physical coercion to get the Palestinians to acquiesce to their diktat.

It has failed and will continue to fail unless they find an effective sanction.

The Palestinians have shown that they are prepared to lose lives and infrastructure en masse, and they obviously know that the rest of the world does not have the stomach to allow the Government of Israel to perform ethnic cleansing of the Palestinians.

So unless the Government of Israel decides to go down the “ethnic cleansing” route, their options are limited to accepting the current stalemate war of attrition or sitting down and negotiating a proper, fair and binding peace.
