The Problems With Isaac Asimov’s Three Laws Of Robotics

The science fiction author Isaac Asimov created three laws intended to govern all robots. This article examines those laws and the problems that can arise even with them in place.

Isaac Asimov’s Three Laws Of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by a human being, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
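The laws form a strict priority hierarchy: each law yields to the ones above it. As a rough illustration, the sketch below models that ordering in Python. The Action class and its boolean flags (harms_human, ordered_by_human, and so on) are hypothetical simplifications invented for this example; deciding whether an action actually harms a human is, of course, the hard part the rest of this article explores.

    from dataclasses import dataclass

    @dataclass
    class Action:
        """A candidate action with hypothetical ethical annotations."""
        description: str
        harms_human: bool = False       # would the action injure a human?
        neglects_human: bool = False    # would inaction let a human come to harm?
        ordered_by_human: bool = False  # was the action ordered by a human?
        endangers_self: bool = False    # would the action damage the robot?

    def permitted(action: Action) -> bool:
        """Evaluate the three laws in strict priority order."""
        # First Law: never harm a human, by action or by inaction.
        if action.harms_human or action.neglects_human:
            return False
        # Second Law: obey human orders (harmful ones were vetoed above).
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation applies only when no higher law
        # compels the action, so an un-ordered self-destructive act is refused.
        return not action.endangers_self

    # The Second Law outranks the Third: an ordered self-destruction is allowed.
    print(permitted(Action("shut down on command", ordered_by_human=True,
                           endangers_self=True)))  # True

Even this toy version makes the ordering's consequences visible: any human order that clears the First Law is permitted, which is exactly the loophole the second-law discussion below turns on.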

The Problems With Isaac Asimov’s Three Laws Of Robotics

The first law has a number of problems. For instance, if a human being were about to injure themselves, would it be correct for a robot to force them into a different course of action? Suicide is a good example: should a robot prevent a human being from killing themselves? Furthermore, the law never specifies what counts as allowing a human being to come to harm. Even leaving the house carries some risk of injury, so permitting a person to leave could be construed as the forbidden inaction. Taken literally, a robot bound by the first law might imprison a human being simply to keep that human from ever being injured.

The second law has a plethora of problems as well. The first is that a human may order a robot to destroy itself. That might be acceptable when the order comes from the robot's owner, but it is sure to be infuriating when a stranger can give the same order to someone else's robot. Furthermore, while people cannot command robots to injure other human beings, they can command them to be exceedingly irritating: nothing in this law prevents someone from using a robot to harass a rival.

The third law is generally more stable than the other two, but it has its own problems. For instance, if an animal attacks a robot, is it ethical for the robot to kill the animal in order to protect itself? There are more pressing examples as well: a robot could protect itself by destroying particularly expensive property if that property posed a risk to it.
