Traffic accidents are a major source of death and injury worldwide. As technology improves, automated vehicles are expected to outperform their human counterparts, saving lives by eliminating accidents caused by human error. Even so, there will be circumstances in which self-driving vehicles must make morally challenging decisions. For example, a car may swerve to avoid hitting a child who has run into the road, but in doing so endanger other lives. How should it be programmed to behave? An ethics commission initiated by the German Ministry for Transportation has created a set of guidelines representing its members' best judgment on a variety of issues concerning self-driving cars.
However, these expert judgments may not reflect human intuition. Researchers designed a virtual reality
experiment to examine human intuition in a variety of possible driving
scenarios. Different sets of tests were created to highlight different factors
that may or may not be perceived as morally relevant. In a setup based on the trolley problem, a traditional ethical thought experiment, test subjects could choose between two lanes along which their vehicle drove at constant speed. They were
presented with a morally challenging driving dilemma, such as an option to move
lanes to minimize lives lost, a choice between victims of different ages, or the possibility of self-sacrifice to save others. The experiment revealed that human intuition was often at odds with the expert guidelines.
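To make that tension concrete, here is a minimal Python sketch of two competing lane-choice policies for a simplified two-lane dilemma. Everything in it is invented for illustration: the Lane representation, the function names, and the child_weight parameter are assumptions, not code or rules from the study or the commission.

from dataclasses import dataclass

@dataclass
class Lane:
    casualties: int  # people endangered if this lane is chosen
    children: int    # how many of those casualties are children

def guideline_choice(current, alternative):
    # Guideline-style policy: minimize total casualties while ignoring
    # personal features such as age, which the commission's guidelines
    # reportedly forbid weighing.
    return min((current, alternative), key=lambda lane: lane.casualties)

def intuition_choice(current, alternative, child_weight=3.0):
    # Intuition-style policy (an assumption for illustration): weigh
    # endangering a child more heavily than endangering an adult.
    def cost(lane):
        adults = lane.casualties - lane.children
        return adults + child_weight * lane.children
    return min((current, alternative), key=cost)

# Example dilemma: staying endangers one child; swerving endangers two adults.
stay = Lane(casualties=1, children=1)
swerve = Lane(casualties=2, children=0)

print(guideline_choice(stay, swerve) is stay)    # True: 1 casualty < 2
print(intuition_choice(stay, swerve) is swerve)  # True: weighted cost 2.0 < 3.0

The only point of the sketch is that the two policies can disagree on the very same scenario, which is precisely the gap between codified guidelines and measured human intuition that the experiment probes.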