Artificially created
beings, whether they be drawn or sculpted, are warmly accepted by viewers when
they are distinctively inhuman. As their appearances are made more real,
however, acceptance turns to discomfort until the point where the similarity is
almost perfect, when comfort returns. This effect, called ‘the uncanny valley’
because of the dip in acceptance between clearly inhuman and clearly human
forms, is well known, particularly to animators, but why it happens is a
mystery. Some suggest it is all about outward appearance, but a study just
published in Cognition by researchers at the University of North
Carolina and Harvard argues that there can be
something else involved as well: the apparent presence of a mind where it ought
not to be. According to some philosophers the mind is made up of two parts,
agency (the capacity to plan and do things) and experience (the capacity to
feel and sense things). Both set people apart from robots, but the researchers
speculated that experience in particular was playing a crucial role in
generating the uncanny-valley effect. They theorised that adding human-like
eyes and facial expressions to robots conveys emotion where viewers do not
expect emotion to be present. The resulting clash of expectations, they
thought, might be where the unease was coming from.
To test this idea, the researchers presented 45
participants recruited from subway stations and campus dining halls in Massachusetts with a
questionnaire about the ‘Delta-Cray supercomputer’. A third were told this
machine was ‘like a normal computer but much more powerful’. Another third
heard it was capable of experience, by being told it could feel ‘hunger, fear
and other emotions’. The remainder were told it was capable of ‘self-control
and the capacity to plan ahead’, thus suggesting it had agency. Participants
were asked to rate how unnerved they were by the supercomputer on a scale where
one was ‘not at all’ and five was ‘extremely’. The researchers found that those
presented with the idea of a supercomputer that was much more powerful than
other computers, or that was capable of planning ahead, were not much unnerved,
giving it average scores of 1.3 and 1.4 respectively. By contrast, those presented with
the idea of a computer capable of experiencing emotions gave the machine an
average of 3.4. These findings are consistent with the researchers’ hypothesis.
There seems to be something about finding emotion in a place where it is not
expected that upsets people. This led the researchers to wonder if the reverse,
discovering a lack of experience in a place where it was expected, might prove
just as upsetting.
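The comparison above boils down to averaging each group's one-to-five ‘unnerved’ ratings. A minimal sketch of that calculation is below; the individual ratings are hypothetical placeholders chosen only to illustrate the arithmetic, not the study's actual data.

```python
# Compute the mean 'unnerved' rating (1 = not at all, 5 = extremely)
# for each of the three conditions. Ratings here are hypothetical.
from statistics import mean

ratings = {
    "powerful":   [1, 1, 2, 1, 2],  # 'much more powerful' condition
    "agency":     [1, 2, 1, 2, 1],  # 'self-control and planning' condition
    "experience": [3, 4, 3, 4, 3],  # 'hunger, fear and other emotions' condition
}

for condition, scores in ratings.items():
    print(f"{condition}: {mean(scores):.1f}")
```

With these placeholder numbers, the experience condition averages well above the other two, mirroring the pattern the study reports.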