Roboethics (robot ethics) is the area of study concerned with what rules should be created for robots to ensure their ethical behavior and how to design ethical robots. The purpose of roboethics is to ensure that machines with artificial intelligence (AI) behave in ways that prioritize human safety above their assigned tasks and their own preservation, and that accord with accepted precepts of human morality.
A pioneer in the field, the science fiction writer Isaac Asimov drafted The Three Laws of Robotics to guide the moral behavior of robots:
1. Robots must never harm human beings or, through inaction, allow a human being to come to harm.
2. Robots must follow instructions from humans without violating rule 1.
3. Robots must protect themselves without violating the other rules.
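The defining feature of the Three Laws is their strict priority ordering: each law applies only when it does not conflict with the laws above it. As a toy illustration only (the field names and boolean flags below are hypothetical; real machine ethics cannot reduce "harm" to a flag), that ordering can be sketched as:

```python
# Toy sketch of the strict priority in Asimov's Three Laws.
# All field names are hypothetical illustrations, not a real framework:
# in practice, "harm" must be predicted and weighed, not read off a flag.

def allowed(action: dict) -> bool:
    # Law 1 dominates everything: no harm to humans, by act or by inaction.
    if action.get("harms_human") or action.get("permits_harm_by_inaction"):
        return False
    # Law 2: obey human orders, unless obeying would break Law 1
    # (those cases were already rejected above).
    if action.get("ignores_human_order"):
        return False
    # Law 3: the robot may preserve itself only when Laws 1 and 2 hold.
    return True

# Obeying a harmless order is permitted, even at cost to the robot:
print(allowed({"harms_human": False, "ignores_human_order": False}))  # True
# Law 1 overrides any order:
print(allowed({"harms_human": True}))  # False
```

The point of the sketch is structural: lower-priority rules are never even consulted once a higher-priority rule is violated.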
At the time, no such robots existed to govern. Asimov also introduced the term robotics in his short story "Liar!," published in May 1941 in "Astounding Science Fiction."
More recently, the British Standards Institution (BSI) published a more fully developed set of guidelines intended to help the creators of robots ensure their machines behave ethically. The standard, "BS 8611: Robots and Robotic Devices," addresses a much broader range of ethical concerns.
BS 8611 suggests that "Robots should not be designed solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behavior." Its stipulations include the principle that robot design must not allow for cultural, sexual or status discrimination. The document also questions whether robots should be designed to foster emotional bonds in users and warns of the possibility of rogue machines that modify their own code.
Much of the concern that drove the need for these rules comes from questions inspired by artists and authors like Asimov and their works. As robots become increasingly autonomous and AI in many ways exceeds human capacities, the need for roboethics standards becomes more pressing.
Futurists and technology experts such as Elon Musk, Steve Wozniak and Stephen Hawking have expressed concerns that, left uncontrolled, robots could lead to the downfall of humanity. More optimistic views include the hope that carefully designed robots could help the world recover from human-created problems.