Apple is developing a way to help interpret a user's requests by adding facial analysis to a future version of Siri or another system. The aim is to cut down the number of times a spoken request is misinterpreted by attempting to analyze the user's emotions. An intelligent software agent of this kind can perform actions on behalf of a user. Part of the system
entails using facial recognition to identify the user and so provide customized
actions such as retrieving that person's email or playing their personal music
playlists. It is also intended, however, to read the emotional state of a user.
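
To make the personalization step concrete, here is a minimal sketch of how a recognized face might be routed to that person's own content. The identity keys, profile data, and function names are invented for illustration and are not taken from Apple's filing.

// Hypothetical sketch: once the facial-recognition stage has matched the face
// to a known user, the assistant can route the spoken request to that person's
// own accounts and playlists.
struct UserProfile {
    let name: String
    let emailAccount: String
    let playlist: String
}

// Stand-in identity store; Apple's actual implementation is not described here.
let profiles: [String: UserProfile] = [
    "face-001": UserProfile(name: "Alice", emailAccount: "alice@example.com", playlist: "Alice's Mix"),
    "face-002": UserProfile(name: "Bob", emailAccount: "bob@example.com", playlist: "Bob's Favorites")
]

func respond(to request: String, recognizedFaceKey: String) -> String {
    guard let profile = profiles[recognizedFaceKey] else {
        return "Unknown face: falling back to a generic, non-personalized response."
    }
    switch request {
    case "check my email":
        return "Fetching mail for \(profile.emailAccount)"
    case "play my music":
        return "Playing \(profile.playlist)"
    default:
        return "Handling '\(request)' for \(profile.name)"
    }
}

print(respond(to: "play my music", recognizedFaceKey: "face-001"))
// Prints: Playing Alice's Mix
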
The system works by capturing an audio input through a microphone and one or more images through a camera. Apple notes
that expressions can have different meanings, but its method classifies the
range of possible meanings according to the Facial Action Coding System (FACS).
This is a standard for facial taxonomy, first created in the 1970s, which
categorizes every possible facial expression into an extensive reference
catalog. Using FACS, Apple's system assigns scores to the candidate interpretations to determine which is most likely to be correct, and can then have Siri react or respond accordingly.
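
As a rough illustration of that scoring idea, the sketch below blends the speech recognizer's confidence in each candidate interpretation with an emotion estimate derived from FACS action-unit intensities, then picks the highest-scoring reading. The action units referenced (AU6 and AU12 for a smile, AU4 for a lowered brow) are standard FACS codes, but the weighting, data structures, and thresholds are assumptions, not Apple's published method.

// Illustrative only: choose among candidate interpretations of an utterance by
// combining speech confidence with an emotion estimate from FACS action units.
struct Interpretation {
    let text: String
    let speechConfidence: Double   // 0...1, from the speech recognizer
    let expectedEmotion: String    // emotion under which this reading makes sense
}

// Derive coarse emotion scores from detected action-unit intensities (0...1).
// AU6 (cheek raiser) and AU12 (lip corner puller) together suggest a smile;
// AU4 (brow lowerer) suggests a frown.
func emotionScores(from actionUnits: [String: Double]) -> [String: Double] {
    let smile = ((actionUnits["AU6"] ?? 0) + (actionUnits["AU12"] ?? 0)) / 2
    let frown = actionUnits["AU4"] ?? 0
    return ["positive": smile, "negative": frown]
}

// Invented weighting: mostly trust what was heard, nudged by what the face shows.
func score(_ candidate: Interpretation, emotions: [String: Double]) -> Double {
    return 0.7 * candidate.speechConfidence +
           0.3 * (emotions[candidate.expectedEmotion] ?? 0)
}

func bestInterpretation(of candidates: [Interpretation],
                        actionUnits: [String: Double]) -> Interpretation? {
    let emotions = emotionScores(from: actionUnits)
    return candidates.max { score($0, emotions: emotions) < score($1, emotions: emotions) }
}

// Example: the face is smiling, so the "positive" reading wins despite
// slightly lower speech confidence.
let detectedActionUnits: [String: Double] = ["AU6": 0.7, "AU12": 0.8, "AU4": 0.1]
let candidates = [
    Interpretation(text: "Play something upbeat", speechConfidence: 0.55, expectedEmotion: "positive"),
    Interpretation(text: "Pause the music", speechConfidence: 0.60, expectedEmotion: "negative")
]

if let chosen = bestInterpretation(of: candidates, actionUnits: detectedActionUnits) {
    print("Siri would act on: \(chosen.text)")
}
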