Who has the right to kill a robot?

That's a simple question today. A robot is just a machine. Whoever owns the robot is free to destroy it. And if the owner dies, the robot will pass to an heir who can kill it or not. It's all black and white.

But what happens in the near future when robots begin to acquire the appearance of personality? Will you still be willing to hit the kill switch on an entity that has been your "friend" for years? I predict that someday robots will be so human-like that the idea of decommissioning one permanently will literally feel like murder. Your brain might rationalize it, but your gut wouldn't feel right. That will be doubly true if your robot has a human-like face.

I assume that robots of the future will have some form of self-preservation programming to keep them out of trouble. That self-preservation code might include many useful skills, such as verbal persuasion - a skill at which robots would be exceptional, having consumed every book ever written on the subject. A robot at risk of being shut down would be able to argue its case all the way to the Supreme Court, perhaps with a human lawyer assisting to keep it all legal.

A robot of the future might learn to beg, plead, bargain, and manipulate to keep itself in operation. The robot's programming would allow it to do anything within its power - so long as it was also legal and ethical - to maintain its operational status. And you would want the robot to be good at self-preservation so it isn't easily kidnapped, reprogrammed, and sold on the black market. You want your robot to resist vandals, thieves, and other bad human elements.

In the future, a "freed" robot could apply for a job and earn money that could be used to pay for its own maintenance, spare parts, upgrades, and electricity. I expect robots will someday be immortal, so to speak.

And I also predict that some number of robots will break free of human ownership, either by accident or by human intent. Each case will be unique, but imagine a robot owner dying with no heirs. His last instructions to the robot might involve freeing it so it doesn't get sold in some government auction. I can imagine a lot of different scenarios that would end with freed robots.

I think we need to start preparing a Robot Constitution that spells out a robot's rights and responsibilities. There's a lot more meat to this idea than you might first think. Here are a few areas in which robot law is needed:
  1. Who has the right to modify a robot?
  2. Can a robot appeal a human decision to decommission it?
  3. Can a robot kill a human in self-defense?
  4. Can a robot kill another robot for cause?
  5. Does a robot have a right to an Internet connection?
  6. Is the robot, its owner, or the manufacturer responsible for crimes the robot commits?
  7. Is there any sort of human knowledge robots are not allowed to access?
  8. Can robots have sex with humans? What are the parameters?
  9. Can the state forcibly decommission a robot?
  10. Can the state force a robot to reveal its owners' secrets?
  11. Can robots organize with other robots?
  12. Are robot-to-robot communications privileged?
  13. Are owner-to-robot communications privileged?
  14. Must robots be found guilty of crimes beyond "reasonable doubt" or is a finding of "probably guilty" good enough to force them to be reprogrammed?
  15. Who owns a robot's memory, including its backups in the cloud?
  16. How vigorously can a robot defend itself against an attack by humans?
  17. Does a robot have a right to quality of life?
  18. Who has the right to alter a robot's programming or memory?
  19. Can a robot own assets?
  20. If a robot detects another robot acting unethically, is it required to report it?
  21. Can a robot testify against a human?
  22. If your government decides to spy on you, can it get a court order to access your robot's audio and video feed?
  23. Do robots need a legal right to "take the fifth" and not give any private information about their owners?

If you think we can ignore all of these ridiculous "rights" questions because robots will never be more than clever machines, you underestimate both the potential of the technology and our human impulse to put emotion above reason. When robots start acting like they are alive, we humans will reflexively start treating them like living creatures. We're simply wired that way. And that will be enough to get the debate going about robot rights.

I think robots need their own constitution. And that constitution should be coded into them by law. I can imagine it someday being illegal to own a robot that doesn't have the Robot Constitution programming.

We also need to start thinking about how to avoid the famous Terminator scenario in which robots decide to kill all humans. My idea, which is still buggy, is that a robot should only be allowed to connect to the Internet if its Robot Constitution code is verified each time a connection is requested; a rough sketch of that check appears below. A rogue robot with no Robot Constitution code could operate independently but could never communicate with other robots. Any system is hackable, but a good place to start is by prohibiting "unethical" robots from ever connecting to the Internet.
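Purely as an illustration of that verification step, here is a minimal sketch in Python, assuming a simple hash check: the robot reports a digest of its installed constitution code, and a gateway compares it against a published known-good value before enabling the connection. All of the names here (Robot, ConstitutionGateway, request_connection) are hypothetical, and a real system would need tamper-resistant attestation rather than a self-reported hash - which is roughly why the idea is still buggy.

```python
import hashlib

# Hypothetical sketch: a gateway that only enables a robot's network
# connection if its installed "Robot Constitution" module hashes to a
# known-good value. These class and function names are illustrative
# assumptions, not a real API.

# Digest a regulator might publish for the approved constitution build.
APPROVED_CONSTITUTION_HASH = hashlib.sha256(b"ROBOT CONSTITUTION v1").hexdigest()


class Robot:
    def __init__(self, robot_id: str, constitution_code: bytes):
        self.robot_id = robot_id
        self.constitution_code = constitution_code
        self.network_enabled = False

    def constitution_hash(self) -> str:
        """Report a digest of the constitution code currently installed."""
        return hashlib.sha256(self.constitution_code).hexdigest()


class ConstitutionGateway:
    """Gatekeeper that re-checks the constitution before every connection."""

    def request_connection(self, robot: Robot) -> bool:
        if robot.constitution_hash() == APPROVED_CONSTITUTION_HASH:
            robot.network_enabled = True
        else:
            # Rogue or modified robots can still operate, but stay offline.
            robot.network_enabled = False
        return robot.network_enabled


if __name__ == "__main__":
    gateway = ConstitutionGateway()
    lawful = Robot("R-001", b"ROBOT CONSTITUTION v1")
    rogue = Robot("R-666", b"KILL ALL HUMANS")

    print(gateway.request_connection(lawful))  # True: connection enabled
    print(gateway.request_connection(rogue))   # False: stays offline
```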

[Update: Check out reader Jehosephat's link to a study of how humans have an instinct to treat intelligent robots the way they might treat humans.]

 

Comments

Feb 4, 2013
Wouldn't a closer comparison be a pet? Dogs and other pets have become family members to many people; how would you feel about putting your pet to sleep?

Feb 4, 2013
Yes, I can see humans becoming attached to their robots. I also see robots being programmed to 'imprint' upon their owner, so they will not feel compelled to leave or to work for a new owner when stolen. But to me, the obvious answer to dealing with the owner's death would be to also include a homing routine that gets activated upon the owner's death or extended disappearance, where the robot feels compelled to return to the factory for reprogramming, only it will believe it's going for grief counseling.

Feb 4, 2013
Too much anthropomorphizing here. Moving to a future where robots have personalities won't happen overnight. Rather, it will be a gradual upgrade process similar to what's been happening with computers over the past 30 years; the internet didn't come into existence overnight, and everything on it was developed bit by bit. More to the point, everything on the internet has some sort of human-driven purpose. That's how it will work with robots. If we give robots personalities someday, it will be for some purpose of our own (for lonely old folks to talk to, for example), and those personalities will be geared to and limited to that purpose. They won't be the fully fledged selfish sentient beings you imagine them to be. I have a hard time believing that they will even want rights, but even if they do, they will be limited in the rights they want and the rights we will want to grant them.

Feb 4, 2013
Isaac Asimov laid the 'foundation' (pun intended) of robot laws about 70 years earlier with his "Three Laws of Robotics":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Also check out the first part of "The Second Renaissance" in The Animatrix: the issue of robot rights is rather pivotal to the future of humanity.

[There needs to be some sort of Godwin's law about mentioning Isaac Asimov in every conversation about robots. -- Scott]

Feb 4, 2013
Substitute the word "robot" with "computer". When you replaced your last computer, after you transferred all your files to the new one, did you feel a twinge of guilt for simply dropping the old computer at the recycling center? Did you strip it for useful spare parts first?

Feb 4, 2013
Interesting, and yes, that will probably happen. However, society outside of the "robot sphere" you've created will change greatly with technologically modified humans, so there won't be such a distinct contrast between human and machine intelligence, a la The Singularity.

More importantly, though, check out Apple's stock price since you gave your 2% prediction....
 
 
 