Robot Ethics

Robot ethics focuses largely on two main categories of issues:

  1. How humans should design, construct, deploy, and interact with robots or other artificial moral agents (AMAs)
  2. The moral status of AMAs

Each category can be considered in relation to the main types of robots: industrial robots, military robots, social robots, and caregiving robots. The first category raises questions such as:

  • How do humans react to robots?
  • What designs are more or less favored by humans?
  • What are the responsibilities of those who create robots?
  • Under what circumstances – or in what contexts – should robots be used?
  • What are the risks of robots being hacked?
  • How can these risks be mitigated?

The second category deals with the behavior of robots if and when they achieve independent decision-making abilities and status as moral agents. It remains unclear whether robots will ever achieve full autonomy. Questions in this category include:

  • If robots achieve moral status, how should they be instructed, rewarded, and punished?
  • What does it mean to instruct, reward, and punish a robot?
  • What implications would robots as moral agents have for human beings?