Teaching Robots “Manners”: Digitally Capturing and Conveying Human Norms

September 11, 2017

Researchers develop methods to help machines display appropriate social behavior in interactions with humans.

Advances in artificial intelligence (AI) are making virtual and robotic assistants increasingly capable of performing complex tasks. For these “smart” machines to be considered safe and trustworthy collaborators with human partners, however, they must be able to quickly assess a given situation and apply human social norms. Such norms are intuitively obvious to most people, largely because growing up in a society provides subtle and not-so-subtle cues from childhood about how to behave appropriately in a group setting or respond to interpersonal situations. Teaching those rules to robots, by contrast, is a novel challenge.

To address that challenge, DARPA-funded researchers recently completed a project that aimed to provide a theoretical and formal framework for what norms and normative networks are; to study experimentally how norms are represented and activated in the human mind; and to examine how norms can be learned and might emerge from novel interactive algorithms. The team created a cognitive-computational model of human norms in a representation that can be coded into machines, and it developed a machine-learning algorithm that allows machines to learn norms in unfamiliar situations by drawing on human data.
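To make the learning idea concrete, here is a minimal sketch, not the researchers' actual model: it assumes a norm can be represented as a (context, action) pair carrying a learned strength, updated incrementally from observations of human approval or disapproval. The `NormModel` class, its `observe` and `evaluate` methods, and the example contexts are all hypothetical, invented purely for illustration.

```python
from collections import defaultdict

class NormModel:
    """Toy norm representation: each (context, action) pair carries a
    strength in [0, 1]. Values near 1 suggest the action is expected in
    that context; values near 0 suggest it is discouraged."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        # Unseen (context, action) pairs start at 0.5: no norm known yet.
        self.strength = defaultdict(lambda: 0.5)

    def observe(self, context, action, approved):
        """Nudge a norm's strength toward 1 or 0 based on one
        observation of human feedback (approved = True or False)."""
        key = (context, action)
        target = 1.0 if approved else 0.0
        self.strength[key] += self.learning_rate * (target - self.strength[key])

    def evaluate(self, context, action):
        """Return the currently learned strength for an action in a context."""
        return self.strength[(context, action)]


# Usage: learn from a handful of labeled observations, then query.
model = NormModel()
for _ in range(20):
    model.observe("library", "speak_loudly", approved=False)
    model.observe("library", "whisper", approved=True)

print(model.evaluate("library", "speak_loudly"))  # drifts toward 0 (discouraged)
print(model.evaluate("library", "whisper"))       # drifts toward 1 (expected)
```

The update rule here is just an exponential moving average toward the observed feedback; the actual project's cognitive-computational representation and learning algorithm are far richer than this toy suggests.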

The work represents important progress toward AI systems that can “intuit” how to behave in a given situation in much the same way people do.