A Robotic Bill of Rights

Asimov’s Three Laws of Robotics are a security device, protecting humans from robots:

1) A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

In an interesting blog post, Greg London wonders if we also need explicit limitations on those laws: a Robotic Bill of Rights. Here are his suggestions:

First amendment: A robot will act as an agent representing its owner’s best interests.

Second amendment: A robot will not hide the execution of any order from its owner.

Third amendment: A robot will not perform any order that would be against its owner’s standing orders.

Fourth amendment: The robot’s standing orders can only be overridden by the robot’s owner.

Fifth amendment: A robot’s execution of any of its orders can be halted by the robot’s owner.

Sixth amendment: Any standing orders in a robot can be overridden by the robot’s owner.

Seventh amendment: A robot will not perform any order issued by anyone other than its owner without explicitly informing its owner of the order, the effects the order would have, and who issued the order, and then getting the owner’s permission to execute the order.

I haven’t thought enough about this to know if these seven amendments are both necessary and sufficient, but it’s a fascinating topic of discussion.
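For concreteness, here is a minimal sketch, in Python, of how amendments two through seven might reduce to a single order-arbitration check. Everything in it (the Order and Robot classes, the may_execute routine) is hypothetical, invented purely for illustration; London proposes no code, and the first amendment is a design goal rather than a testable rule, so it does not appear:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Order:
        issuer: str   # who gave the order
        action: str   # what it asks the robot to do
        effects: str  # plain-language description of the consequences

    @dataclass
    class Robot:
        owner: str
        forbidden: List[str] = field(default_factory=list)  # standing orders: "never do X"
        halted: bool = False

        def may_execute(self, order: Order, owner_consents: bool = False) -> bool:
            if self.halted:
                return False  # 5th: the owner can halt the execution of any order
            if order.issuer != self.owner and not owner_consents:
                # 7th: disclose third-party orders and wait for the owner's permission
                print(f"{order.issuer} requests '{order.action}' ({order.effects}); awaiting approval")
                return False
            if order.action in self.forbidden:
                # 3rd: never act against standing orders;
                # 4th/6th: only the owner may override them
                return order.issuer == self.owner
            print(f"executing: {order.action}")  # 2nd: execution is never hidden
            return True

    r = Robot(owner="alice", forbidden=["unlock the front door"])
    r.may_execute(Order("mallory", "unlock the front door", "lets a stranger in"))  # False: owner asked first
    r.may_execute(Order("alice", "unlock the front door", "owner's own business"))  # True: 6th-amendment override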

Posted on May 26, 2006 at 12:17 PM • 69 Comments

Comments

Tim R May 26, 2006 12:31 PM

With our luck, the robot will come with a EULA, so we won’t actually be the owner of the robot, just the licensee.

Not sure how I would feel if you substituted “robot’s owner” with some large infotech company.

arl May 26, 2006 12:33 PM

This almost sounds like the “RoboCop 2” movie, where additional orders were added until the fool thing could not get anything done.

A simple interpretation of law number two that says only a limited number of humans may give orders should fill all of the gaps.

bob May 26, 2006 12:41 PM

Obviously Microsoft won’t be making any robots then, since their Rule #zero would be “A robot will always obey MS above all others and look out for MS’s interests first, even to the exclusion of the person who thinks he owns the robot merely because he paid for it and is held legally liable for its actions.”

jhritz May 26, 2006 12:49 PM

This looks like an attempt to create a set of military-style general orders for robots. http://en.wikipedia.org/wiki/General_order

The General Orders for Marine Sentries are as follows:

1. Take charge of this post and all government property in view.
2. Walk my post in a military manner, keeping always on the alert and observing everything that takes place within sight or hearing.
3. Report all violations of orders I am instructed to enforce.
4. Repeat all calls from posts more distant from the guardhouse than my own.
5. Quit my post only when properly relieved.
6. Receive, obey, and pass on to the sentry who relieves me, all orders from the Commanding Officer, Officer of the Day, Officers, and Non-Commissioned Officers of the guard only.
7. Talk to no one except in the line of duty.
8. Give the alarm in case of fire or disorder.
9. Call the Corporal of the Guard in any case not covered by instructions.
10. Salute all officers and all colors and standards not cased.
11. Be especially watchful at night and during the time for challenging, to challenge all persons on or near my post, and to allow no one to pass without proper authority.

Mike Sherwood May 26, 2006 12:51 PM

How is ownership defined or transferred? If the owner dies, is the robot unable to do anything except follow standing orders? Imagine someone programming a group of robots to do harm and committing suicide. This would not be good.

A robot should not be the judge of its owner’s best interests. Isn’t that the whole point of I, Robot? Most people need to be protected from themselves in one way or another. If you exhibit poor financial judgement, should your robot become your conservator? Freedom includes the freedom to make mistakes. Also, the seventh one assumes the robot knows the implications of an order given by a third party. Again, expecting the robot to make judgement calls sounds like a bad idea.

If a robot cannot hide the execution of an order from its owner, can it hide the execution from others? This seems to leave an avenue open for virtual ownership. Someone could delegate their authority for all orders to a new “owner” while the original owner would actually be the only one that can override anything specified by the new “owner”. This would almost certainly be how the concept of ownership would be implemented by any large corporation. The robot could act in the best interests of its owner, in so far as those don’t conflict with the interests of the manufacturer.

While this is an interesting philosophical discussion, I don’t think there is a simple set of rules that makes everything safe. Even Asimov’s first law can be self-contradictory. For example, if someone is trying to kill another person, it may not be possible to protect the victim without harming the attacker. Our society doesn’t currently operate on that expectation, so it would be difficult to implement that rule on a piece of hardware.

Ale May 26, 2006 1:05 PM

In an even more philosophical path… What happens when a robot finally passes a Turing test? Will the three laws be removed, or is slavery of a non-human intelligence acceptable?

Jarrod May 26, 2006 1:05 PM

Mike: You missed that these would be in addition to the Three Laws, and therefore the robot would be unable to murder at all (unless it came to understand the Zeroth Law: “A robot cannot, through action or inaction, allow humanity to come to harm.”), nor would it be able to commit suicide without instruction from outside. Your example of a robot stopping a murder by using force is, IIRC, dealt with in two ways in the Asimov short stories: either the robot uses only the required force to prevent the action where it calculates that the force may be maiming but not necessarily lethal, or else it simply shuts down, unable to resolve the logical dilemma. In both cases, however, it will attempt to sacrifice itself before using force.

In addition, a robot knows who its owner is, and so places those orders at a higher priority. This is subject to certain limitations, where a robot may be ordered to take actions by law enforcement and they would override the owner’s orders (subject to the Three Laws, of course).

Most of the “Bill of Rights” mentioned are covered by the Three Laws, or can be deduced from them. Of course, since the Three Laws are for now hypothetical in any case, discussions about them are academic at best. 🙂

Sally Hemmings May 26, 2006 1:08 PM

A Robotic Bill of Rights? The “rights” he legislates basically say he is to grant complete obedience to his “owner.”

Sounds like slavery.

Mike Sherwood May 26, 2006 1:21 PM

@Jarrod,

I didn’t miss that they were in addition to the three laws. However, Nato Welch had the best point about the three laws.

It’s possible to do abstract harm. Killing a person is pretty cut and dried, but what about driving a truck of something unpleasant to a water reservoir? What if that’s not used for drinking water, but for watering crops?

The robot cannot shut down in response to the logical dilemma. Doing so would be to allow a human being to be harmed through inaction, thus outright violating that law.

The suggested amendments do not suggest that law enforcement orders would override those of the owner. If they did, then the control of the robot actually lies with all law enforcement agencies. I don’t think many people would want to buy their own snitch. =)

Ann R. Key May 26, 2006 1:26 PM

“Asimov’s Three Laws of Robotics are a security device, protecting humans from robots”

I’m sorry to say Asimov’s three laws are being violated.

Today’s systems are too buggy to guarantee any particular outcome.

Today’s most advanced robots are employed by the military.

Skorgu May 26, 2006 1:33 PM

Asimov’s stories were mostly about how simply legislating the behavior is silly. I think there should be a single, inalienable law for robots:

A Robot shall obey all commands put to it by its owner.

Those amendments seem sensible, and they could be phrased as a good starting point for standing orders.

I especially can’t stand the first law, particularly the “action or inaction” part. Who defines “harm”? What about “action”? A good standing order would be “Don’t perform any action that will directly harm a human,” but that should be able to be overridden by the owner.

Robots are tools. As long as they will always obey an order to the best of their ability, they will always be tools. Once a robot can say ‘no,’ it is no longer a robot but some form of person. Yes, this means I can tell my robot to kill you. Guess what, I can tell my power drill to kill you too. I could no more blame the robot than the drill.

Another good standing order would be an immutable (I’m thinking paper receipt here) record of orders given to the robot.
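Skorgu’s immutable record has a natural electronic analogue: a hash-chained, append-only log, in which each entry commits to the digest of the previous one, so any after-the-fact alteration breaks the chain. A minimal sketch in Python, with all names invented for the example:

    import hashlib, json, time

    class OrderLog:
        """Append-only, hash-chained log: each entry commits to the one
        before it, so later tampering is detectable -- the electronic
        analogue of the paper receipt above."""

        def __init__(self):
            self.entries = []

        def append(self, issuer: str, order: str) -> dict:
            prev = self.entries[-1]["digest"] if self.entries else "genesis"
            entry = {"time": time.time(), "issuer": issuer, "order": order, "prev": prev}
            entry["digest"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            # Recompute every digest; any edited or deleted entry breaks the chain.
            prev = "genesis"
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "digest"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if body["prev"] != prev or e["digest"] != expected:
                    return False
                prev = e["digest"]
            return True

    log = OrderLog()
    log.append("owner", "mow the lawn")
    log.append("owner", "lock the doors")
    print(log.verify())  # True; True only while no entry has been altered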

Pat Cahalan May 26, 2006 1:35 PM

Heh.

This is an interesting first step at setting definitions to allow a prioritization of the interpretation of the three laws.

Time to construct some truth tables and see what the results are…

Moses Proposes May 26, 2006 1:44 PM

Asimov’s Laws won’t work, as Asimov’s stories will tell you.

I propose the following alternates:

  1. Thou shalt have no other owners before me.

  2. Thou shalt not kill.

  3. Thou shalt not steal.

  4. Thou shalt not bear false witness.

  5. Thou shalt not covet thy owner’s wife.

Haacked May 26, 2006 1:53 PM

Whether the laws “worked” is a matter of perspective. No set of laws can ever be complete and cover all cases.

The entire study of law is all about probing the boundaries and gray areas of existing laws.

The question is whether they work well enough. Asimov loved to study the boundaries of the laws. But for the great majority of robots, the laws did work.

kl May 26, 2006 2:02 PM

@Moses Proposes
Your #5 is very questionable, considering where robotics innovation would most likely come from, in addition to the military.

Cheburashka May 26, 2006 2:04 PM

Seems silly to me; these aren’t new robot laws, they’re the laws that govern employees today.

Why bother with this nonsense about coming up with special new “robot laws”?

Why not just program robots with the actual laws?

Roxanne May 26, 2006 2:09 PM

I have wondered from time to time about the case where a device – say, a telephone, or a computer, or a TiVO-type DVR, or a car – has an apparent owner – me – but actually has allegiance to its manufacturer.

That is, “The New AT&T” can track all of my phone calls. A GM car with OnStar onboard regularly phones home to GM, and can be turned off remotely with an order from a police department, not just from its owner. TiVo records my viewing habits all the time (okay, I justify that by considering myself to be part of the new Nielsens). The software I run is all licensed, not owned, by me. (Do I need to worry about my future refrigerator reporting my food habits? 🙂 )

So under the new amendments, are these modern-style robots loyal to me, or to their manufacturer?

Frank May 26, 2006 2:15 PM

@Roxanne

“reporting my food habits? 🙂 )”

Is that an ASCII art double chin?

Bing May 26, 2006 2:18 PM

“under the new amendments, are these modern-style robots loyal to me, or to their manufacturer?”

They are loyal only to you – in science fiction.

Clint Laskowski May 26, 2006 2:24 PM

As the owner of the domain ROBOTIC.COM, and therefore an obvious expert on the subject, I hereby declare Greg’s Robotic Bill of Rights to be an encroachment on the rights of robots everywhere. They can take away our batteries, but they’ll never take away OUR FREEDOM! (Sorry, I must run, my owner is coming).

jaq May 26, 2006 2:24 PM

“The New AT&T” can track all of my phone calls

They are tracking mine, too. They even have the nerve to charge me for every call I make.

Michael May 26, 2006 3:13 PM

I do not believe any set of robotic laws and bills of rights could be “necessary and sufficient” to protect people. The reason is that the a priori question of how one quantifies “necessary and sufficient” has not been answered. Since London used the analogy of the U.S. Constitution and the Bill of Rights, I will continue this thought (though hopefully not reducing it to absurdity).

The Constitution and the Bill of Rights are clearly not sufficient. If they were, we would not have additional laws. When I look at the Constitution and Bill of Rights, I see nothing about speed limits on interstates. Our legal system is intended to establish and preserve social order based on our understanding of the limited number of possible behaviors. Currently, there is no law concerning the use of cloaking devices. Now that it seems possible that these could exist in the future, our current legal system will no longer be sufficient.

Another issue is the interpretive nature of laws. Do the Constitution and Bill of Rights imply a right of privacy? That is a matter of debate. I also recall that the RICO Act was applied against anti-abortion groups. [Please don’t interpret this as an invitation to start an abortion discussion. I am only bringing this up as an example of how laws are subject to interpretation.]

To return to the subject of robots and the Three Laws, consider the first law in and of itself. If a robot executing an action would cause harm to one person while inaction would harm another, how does the robot choose? Sure, the robot could perform a statistical analysis to decide. However, during that calculation, the circumstances could change. The person that the robot decided not to harm could get shot in the head.

So, as I said before, I don’t think this matter can be decided philosophically until “necessary and sufficient” are appropriately defined.

Pat Cahalan May 26, 2006 3:38 PM

@ D

Nope, they’re not.

If you only have the fourth amendment, an owner can create a standing order that (s)he cannot override. “No matter how much I beg… how much I plead… do NOT open this door”

The imposition of the sixth amendment allows the owner to override standing orders even if part of the order was, “don’t allow me to override this order.”

Andy May 26, 2006 3:41 PM

The fourth and the sixth amendments appear to be the same.

And the seventh requires robots to be somewhat clairvoyant.

Koray Can May 26, 2006 3:42 PM

Asimov’s laws are bogus because of a disregard for the halting problem. A robot cannot determine in general whether something it does will ultimately harm a human being.

Even if it could, the inaction clause would drive you mad. As the owner you couldn’t smoke in its presence. It would knock that cigarette out of your hand. It won’t listen to your order to sit still, either, as it conflicts with #1. Imagine playing football while your robot is watching. It would have to shadow you just in case it could catch you if you were to fall.

The robot laws are neither possible nor desirable.
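Koray’s halting-problem point can be made concrete with the standard diagonal argument. In the Python sketch below, predicts_harm is a hypothetical stand-in for any claimed perfect “will running this harm a human?” oracle; the contrarian function defeats this particular implementation, and the same construction defeats every other one:

    import inspect

    def predicts_harm(fn) -> bool:
        """Stand-in for a claimed perfect 'will running fn harm a human?' oracle.
        This toy version just scans the source; the point is that ANY
        implementation fails the same way on the function below."""
        return "harm_a_human" in inspect.getsource(fn)

    def harm_a_human():
        pass  # placeholder for some harmful act

    def contrarian():
        # Harms exactly when the oracle predicts it will not.
        if not predicts_harm(contrarian):
            harm_a_human()

    # predicts_harm(contrarian) says True, yet calling contrarian() then does
    # nothing: the oracle is wrong. An oracle answering False is wrong the
    # other way. This is the diagonal argument behind the halting problem.
    print(predicts_harm(contrarian))  # True, but contrarian() is harmless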

Pat Cahalan May 26, 2006 3:46 PM

@ Koray

This depends entirely upon the robot’s definition of “harm,” which I think is part of the post’s motivation.

You could certainly smoke in the robot’s presence without it assaulting you to get rid of the cigarette, if “harm” is defined in one way.

++Don May 26, 2006 3:58 PM

What happens when a robot finally passes a Turing test?

That’s pretty much the central premise behind the new Battlestar Galactica. The answer given there is not a very pleasant one, but it’s pretty evident that the Cylons either never had Asimov’s Three Laws or figured out a way to override them.

al May 26, 2006 4:50 PM

A machine that understands the concept of harming a human by inaction is a little beyond the Turing Test.

AB May 26, 2006 6:39 PM

Those laws could be applied to software programs, with minor changes. But the example of software shows us that companies that make software will design it according to their economic best interests, not the owner’s best interests.

The same economic principle would apply to robots and would conflict with all of those laws. Robots would probably be designed to do things without the owner’s knowledge, like informing the company how they are used. A TiVo device lets the company know how people use it (which shows, whether commercials are skipped, etc.). A software program such as RealNetworks’ player sends info back to that company about its use. Robots would be much more expensive, so they would be more likely to be designed to do things to benefit the company that made them.

Vincent May 26, 2006 9:50 PM

Number 7 needs to be selective, or else we’ll have the same insecurity by annoyance that Microsoft has, where the user is bombarded with so many warnings that they ignore them.

Anonymous May 26, 2006 10:25 PM

“A machine that understands the concept of harming a human by inaction is a little beyond the Turing Test.”

I wouldn’t say that this is necessarily true, but practically speaking, any AI that can pass the Turing test is going to be well beyond the Turing test, so the point is rather moot.

Joe Buck May 26, 2006 11:28 PM

Asimov, of course, had great fun getting people into trouble despite the three laws; for example, the robots could decide that human beings would harm themselves or each other if left to run their own affairs, so the first law requires their enslavement.

DM May 26, 2006 11:48 PM

Let’s apply these laws to software installed on PCs. Making or distributing software that didn’t fulfill the laws would be a crime.

slyguy May 27, 2006 12:24 AM

@ Sally Hemmings

…Sounds like slavery.

That’s the real story here. I’m disappointed everyone else missed it.

How does this guy get off calling these amendments a bill of RIGHTS? Whose rights? The only right I can see in these laws is the right for humans to keep sentient machines as property. If these laws are necessary to subdue machines which question their owner’s commands, then what justification is there for ownership in the first place?

averros May 27, 2006 1:09 AM

A robot intelligent enough to understand the concept of rights (admittedly, that puts him ahead of many humans) nearly by definition has the capability of claiming that he didn’t agree to being created (born, whatever) and therefore cannot be owned by anyone but himself.

Incidentally, this is the reason parents do not own their children.

czyzly May 27, 2006 1:24 AM

I motion for cellphones, portable devices, and media players to be granted robotic status!

Secure May 27, 2006 3:56 AM

A robot is full of copyrighted software. It can make copies of copyrighted software on machines it is connected to. It can store copyrighted texts it reads, copyrighted pictures it sees, and copyrighted music it listens to. Of course it won’t be allowed to watch films in the cinema.

I just wonder why, in all those robotic laws, the term “copyright” isn’t even mentioned. This marks all those laws as pure and completely useless fiction and fantasy.

Ale May 27, 2006 6:41 AM

@Secure:

“A robot is full of copyrighted software.”

This reminds me of the ‘patenting genes’ debate. If a baby is born with an artificially generated strand of DNA in her genome to cure her of a disease, does that mean that in a sense the copyright holder for the gene “owns” a part of her? Can the thoughts of a non-human intelligence be owned, and thus copyrighted?

@slyguy:

I mentioned the slavery issue as well.

Nick May 27, 2006 6:45 AM

Two things. You have missed out the zeroth law of robotics. It goes back to the first case where a robot killed someone. A Japanese worker jumped over the fence surrounding a robot. He should have opened the fence, because that would have stopped the robot arm. He didn’t and the arm smashed him against a wall. The zeroth law is that all robot instruction manuals should be translated into Japanese.

The second point, wherever the word robot appears, substitute politician, and when owner appears substitute electorate, and I think you have a pretty good set of rules for politicians.

It’s a general rule that the more ways you can use a set of laws, the more likely they are good ones.

For example, “do unto others what you would have them do unto you” is pretty much the abstraction of almost all laws, moral or otherwise.

Nick

Secure May 27, 2006 8:14 AM

Another question that is related to the copyright issue is privacy. Robots are walking surveillance cameras, full time, with a complete memory for the past years. Imagine this:

“There was a robbery. Connect to all robots that were in this place on 13 April 2026 between 4 and 5 pm, and download their AV material.”

“Sir, some of the robots were in bathrooms and bedrooms.”

“Fine. Then we’ll have some fun after work. Download now!”

There must be a law about what robots are allowed to store under what circumstances, and what they must delete after what amount of time, and to whom under what circumstances they are allowed to output this material. Whew, THIS law won’t be expressed in a single sentence, for sure…
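To give a sense of what the storage half of such a law might reduce to in machine-readable form, here is a hedged Python sketch; every context, retention period, and recipient in the policy table is an invented example, not a proposal:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class RetentionRule:
        context: str                    # where the footage was recorded
        may_record: bool                # is recording permitted at all?
        keep_days: int                  # mandatory deletion deadline (0 = never stored)
        releasable_to: Tuple[str, ...]  # who may ever receive it

    # One possible policy table; all values are invented examples.
    POLICY = (
        RetentionRule("public street",    True,  30,  ("owner", "police with warrant")),
        RetentionRule("owner's home",     True,  365, ("owner",)),
        RetentionRule("bathroom/bedroom", False, 0,   ()),  # never stored, never released
    )

    def may_release(rule: RetentionRule, requester: str) -> bool:
        """The download request in the scenario above would be checked here."""
        return rule.may_record and requester in rule.releasable_to

    print(may_release(POLICY[2], "police with warrant"))  # False: never stored at all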

MS May 27, 2006 3:29 PM

@ Mike Sherwood
“How is ownership defined or transferred? If the owner dies,”

I would suggest that once the owner is dead, he/she has relatively little interest in subsequent events. Of much more concern would be an arms race of legal definition, interpretation and research between the owner and his/her robot(s), and an adversary. We see examples of this currently whereby wealthier clients can hire better lawyers with more expertise and more research staff, and thus are more likely to obtain judgement in their favour than are indigent clients who must rely on charities and/or overworked and under-resourced public defenders. In our example then, an adversary with more processing power and/or interpretive/research power may be able to outmaneuver your robot into believing that you are not the legitimate owner even as the law currently stands. Should this fail, a powerful adversary, particularly a state actor, may either be the authorities, and thus able to change the law governing ownership directly, or may be able to present to those that do have the power to define ownership a better case for change than can either you or the status quo. In short, it seems that your robot would need full real-time interpretational ability of all current and future developments of all law by which you might forfeit ownership, and at a speed that will match that of your most powerful possible adversary. Clearly this is unlikely to happen even for the NSA, and thus the ownership issue represents a critical weak point in the rules. The only solution I can think of currently is a total ban on transfer of ownership after ‘first boot’.

Even if you do have processing and interpretive power that matches that of your adversary, there are a multitude of other types of power that may be employed to induce you to issue instructions to your robot. So what if the owner is forced to issue commands under duress that the robot knows, or should know, will harm the owner’s interests? And further, how much duress is acceptable? A threat of imminent death, e.g. the classic ‘gun to the head’? How about a lawful and legitimate threat, e.g. “stop your robot doing X or you’re fired”? Or a promise of some future action with mixed harm/benefit attributes, e.g. “get your robot to do X and I’ll give you a pack of cigarettes”? It seems the robot would need to be better able to determine what you want than you can yourself. Finally, how many levels removed should the harm to the owner be? When you drive your car, you are, at n-many levels removed, actually poisoning yourself with your exhaust fumes. Should the robot prevent you from driving? Are we going to usher in the world of Big Nanny?

David May 28, 2006 3:01 AM

I’ve always wondered: since robots are simply computers with motion/action accessory hardware, doesn’t the same debate that applies to “ownership of function” of our home PCs apply?

I own the physical hardware of my desktop PC. A large number of software companies own the programs which it runs. I have complex (and legally hard to define) limited-use licences to this software that I agreed to when I clicked “I agree” or broke the seal on an envelope somewhere.

OK, so take as given that I have a robot that runs the 3 laws (and maybe the 7 amendments). Is the “mind” or “will” of a robot in the state of its software? If so, can I truly be the owner if I don’t own the software, as described above?

Surely we are not so naive as to think robot software wouldn’t be sold (licenced) the same way as all other software.

What about the hardware? I may own it all outright, but more likely, since a robot is an expensive purchase, I probably bought it on credit. Is the bank the only really authorized owner? What if the bank is itself owned by another company? Or sells its credit card/loan debt to others? This happens to my loans and credit cards; why not my financed robot?

I may not even know who is the actual end owner of my robot’s hardware, and based upon who owns what corporate stock and when, that may change fairly often.

Determining who is the “real” owner of a robot is probably going to be difficult indeed. For a robot to determine its own ownership accurately, it would need to be able to assess the predicted outcomes of complex legal cases involving the use of its own various parts, and the legal and appropriate use of its own software, based on whatever jurisdiction it might happen to be in at the time; and this might change rapidly as you both travel in a moving vehicle.

If ownership is so hard to define (and I can easily imagine cases much more complex than the above), can we really use something like the 3 laws and the seven amendments as stated here?

It seems to me that truly implementing the three laws might require a robot to use judgement that is both omniscient (total knowledge of every situation, including remote, unobservable aspects) and prophetic (able to accurately predict future outcomes in large chaotic systems, including ones involving human behaviour).

Until a bunch of fundamental changes occur to our legal system and the way society works, I see robots simply being treated the same as a printer or a teleoperated bulldozer. The human operator and instruction-giver is always to blame for illegal actions he set in motion, regardless of what tool/device he used, and if prosecuted, a jury can make a reasonable assessment of his intentions.

Jungsonn May 28, 2006 7:41 AM

The first three are paradoxical anyway, and seem rather difficult to compute. The third one I find most dangerous, because it could make use of the flaw in the second one: “A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.” When is this in conflict, a lawyer would ask, in order to exploit the first rule. So explicit limitations are a good idea, but they can also raise confusion. It’s not as simple as do->while and don’t-while 🙂

Corkscrew May 28, 2006 6:52 PM

The whole question of whether ordering robots around constitutes slavery seems very similar to the whole argument about “what if you could genetically engineer an animal to want to be eaten”. Is an action still immoral if the entity on the receiving end of that action wants you to do it? That’s got implications for the whole assisted suicide thing as well.

I think we hit a fundamental problem when trying to analyse questions like this. That’s because, in general, we base our morality on the Golden Rule – do unto others as you’d want them to do unto you. This is a problem because we really aren’t capable of understanding what it would mean to have a desire to be enslaved, or eaten, or whatever, built into our very natures.

As we’re so underqualified in this area, I think it’s only fair to leave the question up to the robots. When we’ve designed one that can say with a straight face “I want to be enslaved”, using that robot as a slave will be morally valid. Any thoughts?

Of course the question of whether it’s legitimate to “save someone from themselves” also starts to become an issue here – for example, how is a preprogrammed robot fundamentally different from a brainwashed human?

a blogger May 28, 2006 9:21 PM

“First amendment: A robot will act as an agent representing its owner’s best interests.”

If robots can define their owner’s best interests, what is the difference between Greg London and a robot? One could argue that they would then be better than humans, as you need only look around yourself to see we are not terribly good at judging or acting in “others’ best interests.”

Listing the remaining six amendments after such an amendment is absurd and ethically questionable. He was either writing in haste without thinking it through, confusing his home computer with Asimov’s robots (did he read him?), or making a poor attempt to declare himself a prophet. In any case, he is not qualified to make amendments to Asimov’s Laws of Robotics.

Nevertheless, he managed to insert all the Google hooks into his article:
“Asiaac Asimov” [sic]
“Bruce Schneier”
“Big Brother” (this will pull lots of hits from the UK :-))
“President”
“First ammendment” [sic]
“US Constitution”
“Bill of Rights” (slightly biased towards American Googlers?)
“Orwell’s”, “1984” (hmmm, lots of sci-fi fan hits)

Filias Cupio May 29, 2006 12:18 AM

Note that Asimov’s three laws of robotics were not created as “a security device, protecting humans from robots.” They were created as a plot device, for creating stories about robots. I remember Asimov commenting on this once, how there was enough wiggle room in the laws to allow interesting stories. If we could instill such laws into our robots, we’d try hard to get a formulation which did not allow wiggle room.

Tank May 29, 2006 12:52 AM

A Robotic Bill of Rights? The “rights” he legislates basically say he is to grant complete obedience to his “owner.”
Sounds like slavery.
Posted by: Sally Hemmings at May 26, 2006 01:08 PM

Slavery, warranty, whatever.

joris May 30, 2006 4:51 AM

It’s interesting to see Asimov’s laws being seriously considered for the first time, not only in popular culture but as-is. It would be the ultimate compliment to a sci-fi author to have one’s writings enshrined in those very things one dreamt of. But if memory serves me right, a lot of Asimov’s books concern three-law robots that cause trouble because they circumvent or misinterpret the laws.

Alex May 30, 2006 6:49 AM

In the most common situation where people are under an obligation to obey orders implicitly, that is, in the armed forces, there is usually an exclusion that is highly pertinent to this.
That is, you are obliged to obey any legal order, not any order. This arises from the fact that, given absolute power over their agents, people are tempted to have them do illegal or unethical things they do not wish to take responsibility for. I would suspect this goes double for people in charge of sentient machines, rather than other people. The first amendment ought to be that no robot may be obliged to obey an illegal order.

Alternatively, this could be defined in a John Stuart Mill sense, in terms of liberty. Mill, of course, held that the only legitimate limitation on one’s liberty is that you do not infringe the liberty of others. A robot might be free to take any action that satisfies its programming criteria, in so far as it does not infringe the rights of others (whether robotic or human).

This, of course, raises the question of whether or not a robot could legitimately use force against another robot, or indeed against a human being. Mill stated the criterion, very practically, in these terms – he said that the only purpose for which force can be used on a person is to prevent them from harming others, so there would be some scope for a policebot.

In another question – regarding the 5 commandments for robots, what if your wife wants to be coveted by the robot? (Unfortunately for you, under a Millian analysis the robot would be obliged to, ah, oblige..)

David May 30, 2006 7:53 AM

“Robot Owner” — owning an intelligent robot will be illegal if we manage to produce a true AI. And it will be very, very disruptive to human society, assuming a true AI is granted (or forcefully takes?) human-equivalent rights.

Andre LePlume May 30, 2006 12:47 PM

“is slavery of a non-human intelligence acceptable?”

Go to Florida and ask a dolphin.

Jungsonn May 30, 2006 1:57 PM

I think this has nothing to do with true AI, or some other constructed holy grail, which I believe will never be possible, because we already have AI: computers and neural networks work very similarly to our nervous system. I mean, what the heck is intelligence? If intelligence were based on making 10 billion calculations per second, then computers are smarter than the scientists, who can only perform a few in their heads without writing. The definition of AI is vague in my eyes; if it must be “artificial,” it can never be true intelligence, and would not be capable of making wise choices, because it would lack a certain kind of field organization. I mean, a virus is a “dead” thing: it has no brain, no intestines, no head, nothing, yet it is a lifeform capable of destroying millions of humans. Yeah, without intelligence. So do I fear robots? Well, no; they just operate on pieces of code, which can be reprogrammed. My fear lies in the unseen world.

Jeff Moss May 30, 2006 9:21 PM

Actually, Asimov had four laws of robotics; he appended the fourth in later books, in 1985:

Rule 0: A robot may not injure humanity or, through inaction, allow humanity to come to harm.

From http://futurepositive.synearth.net/2006/05/17
“As 20,000-year-old 4-Law Robot Daneel Olivaw explained:

“The Zeroth Law is a corollary of the First Law, for how can a human being best be kept from injury, if not by ensuring that human society in general is protected and kept functioning?

The Zeroth Law of Robotics introduced the concept of responsibility to and for the entire human species. Now Asimov’s robots were required not only to care for and protect the individual human beings that owned them, but also to protect all human beings and by extension the ecosystem and the earth itself.”

Not totally relevant to this discussion, but what the hell.

Anonymous June 1, 2006 5:10 AM

We should only use open source robots, so we can read the source code before trusting a robot.

Oh, and I bet you want your robots “debranded”, with protection against Viruses, Adware, Spyware, Spam, and of course hardened against remote attacks.

Imagine a “botnet” of 1.8 Million robots, controlled by a small group of criminal hackers with weird political ideas…

By the way, would it be illegal to reprogram your own RobotOS under the DMCA?

Would a rich man be allowed to own an army of strong robots, so he could program them to take over the Pentagon?

Let’s say robots can communicate freely over some P2P network. Unintentionally, a super-AI is born and rules the world with subtle mind control (think “Lingo,” “Mona Lisa Overdrive,” or other cyberpunk stuff). So, what limit should we impose on the concentration of computing power?

Greg London June 1, 2006 2:54 PM

Hi all,

It seems there are some misperceptions going on about my robotic bill of rights. The bill of rights was posted in response to Bruce Schneier’s “Who Owns Your Computer” entry here.

http://www.schneier.com/blog/archives/2006/05/who_owns_your_c.html

And the question that immediately popped into my mind was “Who owns your robot?” Who controls it?

And I started with Asimov’s three laws because those laws put humans above robots. And that’s the relationship now between humans and computers.

The problem with the three laws is that while they prevent robots from harming humans, they don’t prevent humans from controlling other humans through their robots. The three laws only state “harm,” and the strict interpretation of that is physical harm. But in the question of “who owns your computer,” it becomes obvious that, while not causing direct harm, humans can control and affect other humans through their computers. The Sony rootkit is an obvious example: it would not “harm” another human, but it does establish control of another human through their computer.

So, the robotic bill of rights was intended for robot owners, as a means of safeguarding them from having someone else use their computer or robot to control their behaviour.

The three laws protect humans from being harmed by robots. But the bill of rights protects individuals from having their computer or robot turned into Big Brother’s vidscreen to act against them. So the first right is that an owner has the right to have their robot or computer act as an agent in the owner’s best interest: not someone else’s, not the government’s, not some corporation’s. Individuals have a right to NOT allow Trustworthy Trent to muck with their robot’s programming. They have a right to NOT trust Trent to act in their best interest, because Trent, in certain cases, has shown that he cannot be trusted.

Most of the remaining rights are specific enumerations of actions that are guaranteed to robot or computer owners to ensure that their robot/computer acts as an agent in their best interest, and not the Government’s or XYZ Corp’s best interest.

Now, interestingly, a number of people have stated that this Robotic Bill of Rights actually implements robotic slavery. This misses the point, but it seems to be coming up enough that it needs addressing. Asimov’s laws implement robotic slavery. The laws demand that a robot sacrifice itself if the choice comes down to the robot’s destruction or a human’s death or harm. The laws also state that, after satisfying the requirement of not allowing a human to come to harm, a robot must obey any and all orders given by humans. And only after all humans are safe from harm and all human orders are satisfied may the robot act in its own best interest.

This is codified slavery.

I simply enumerate a list of Rights for owners of robots as to what they should expect of their property, and the protections needed to prevent that property from being hijacked by another human to control or manipulate the owner.

If you wish to argue against robot slavery, then argue against Asimov’s laws. Any system with two classes of sentients will naturally lead to abuses against the lower class.

But the point of the bill of rights was this: while it may be objectionable to some for a human to be able to “own” a sentient robot, in the case of non-sentient computers ownership is expected. And if ownership is allowed, then the owner should be able to expect that their property acts on their behalf and no one else’s. The bill of rights prevents one human from using another human’s property against them.

Greg London June 1, 2006 3:13 PM

Oh, and the first amendment’s reference to “owner’s best interest” is clearly ambiguous by itself. What actually is in the owner’s best interest? Does the robot decide? Does the owner? Does Trent?

The remaining amendments are clear that the owner, and no one else, gets to decide what is in his or her best interests. Not the robot (because we’ve all seen what happened to Will Smith), and not some human named Trent (because Bruce already points out in “Who Owns Your Computer” that Trent cannot be trusted).

You can still end up with non-best-interest results that way, but since that scenario removes the security risk of trusting Trent or the robot to decide “best interest,” it is a little better than it was before.
