01 April 2020

Learning How People Interact With AI

Learning how people interact with artificial intelligence-enabled machines, and using that knowledge to improve people's trust in AI, may help us live in harmony with the ever-increasing number of robots, chatbots and other smart machines in our midst, according to a Penn State researcher. Researchers at Penn State's Media Effects Research Laboratory proposed a framework for studying Human-AI Interaction (HAII) that may help investigators better understand how people engage with artificial intelligence. The framework identifies two routes, cues and actions, that AI developers can focus on to earn users' trust and improve the user experience.

Cues are signals that can trigger a range of mental and emotional responses in people. The cue route is based on superficial indicators of how the AI looks or what it apparently does. Several cues affect whether users trust AI. They can be as obvious as human-like features, such as the human face some robots have or the human-like voice used by virtual assistants like Siri and Alexa. Other cues are more subtle, such as a statement on the interface explaining how the device works, as when Netflix tells viewers why it is recommending a certain movie. Each of these cues can trigger a distinct mental shortcut, or heuristic.
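To make the idea of an interface cue concrete, the following is a minimal sketch in Python of how a recommender might attach a transparency cue, a short explanation of why an item is being suggested, to each recommendation. The function names and data are purely illustrative assumptions and are not part of the researchers' framework or of any real recommendation system's API.

    # Minimal sketch: pairing a recommendation with a transparency cue (an explanation).
    # All names and data here are hypothetical illustrations, not a real API.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        title: str        # the recommended item
        because_of: str   # the item in the user's history that motivated the suggestion
        explanation: str  # the interface cue shown to the user

    def recommend_with_cue(watch_history: list[str], suggestion: str) -> Recommendation:
        """Attach a short explanation to a suggestion, acting as a transparency cue."""
        anchor = watch_history[-1] if watch_history else "your recent activity"
        return Recommendation(
            title=suggestion,
            because_of=anchor,
            explanation=f"Recommended because you watched {anchor}",
        )

    if __name__ == "__main__":
        rec = recommend_with_cue(["The Crown"], "Downton Abbey")
        print(rec.explanation)  # prints: Recommended because you watched The Crown

In this sketch, the explanation string is the cue: the recommendation itself is unchanged, but the added statement signals transparency and can invoke a heuristic about the system's trustworthiness.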
