A Robotic Bill of Rights
Asimov’s Three Laws of Robotics are a security device, protecting humans from robots:
1) A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
In an interesting blog post, Greg London wonders if we also need explicit limitations on those laws: a Robotic Bill of Rights. Here are his suggestions:
First amendment: A robot will act as an agent representing its owner’s best interests.
Second amendment: A robot will not hide the execution of any order from its owner.
Third amendment: A robot will not perform any order that would be against its owner’s standing orders.
Fourth amendment: The robot’s standing orders can only be overridden by the robot’s owner.
Fifth amendment: A robot’s execution of any of its orders can be halted by the robot’s owner.
Sixth amendment: Any standing orders in a robot can be overridden by the robot’s owner.
Seventh amendment: A robot will not perform any order issued by anyone other than its owner without explicitly informing its owner of the order, its effects, and who issued it, and then getting the owner’s permission to execute it.
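Taken together, the amendments read like an access-control policy for order handling: log everything, let only the owner set or override standing orders, and require explicit owner approval for third-party orders. A minimal sketch of that policy (all names here — `Robot`, `Order`, `submit_order` — are hypothetical illustrations, not anything from London’s post):

```python
from dataclasses import dataclass


@dataclass
class Order:
    issuer: str  # who issued the order
    action: str  # what the order asks the robot to do


class Robot:
    """Toy model of London's amendments as an order-handling policy."""

    def __init__(self, owner):
        self.owner = owner
        self.standing_orders = set()  # actions the owner has forbidden
        self.log = []                 # Second amendment: nothing is hidden

    def set_standing_order(self, requester, forbidden_action):
        # Fourth and Sixth amendments: only the owner may set or
        # override standing orders.
        if requester != self.owner:
            return False
        self.standing_orders.add(forbidden_action)
        return True

    def submit_order(self, order, owner_approved=False):
        # Second amendment: every order is logged for the owner to see.
        self.log.append(order)
        # Third amendment: refuse orders that conflict with standing orders.
        if order.action in self.standing_orders:
            return False
        # Seventh amendment: third-party orders require explicit
        # owner permission before execution.
        if order.issuer != self.owner and not owner_approved:
            return False
        return True  # the order would be executed here
```

Even this toy version makes one gap visible: the First amendment ("act in the owner’s best interests") is the only rule that can’t be reduced to a simple check, which is perhaps where the hard problems hide.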
I haven’t thought enough about this to know if these seven amendments are both necessary and sufficient, but it’s a fascinating topic of discussion.