The engineering of autonomous, morally competent robots might one day include a carefully crafted conscience capable of distinguishing right from wrong and acting on that distinction. In the near future, artificial intelligence entities might be better moral agents than we are, or at least better decision makers when facing certain dilemmas. Since 2002, the ethics of artificial intelligence has been divided into two subfields: machine ethics and roboethics.
Naturally, to create such morally autonomous robots, researchers first have to agree on some fundamental pillars: what moral competence is, and what humans would expect from robots working side by side with them and sharing decision making in areas like healthcare and warfare. At the same time, another question arises: what is the human responsibility in creating artificial intelligence with moral autonomy? And the leading research question remains: what would we expect of morally competent robots?