class: center, middle

# Human-AI Communication in Games
## Dr. Markus Eger
### Universidad de Costa Rica
### Escuela de Ciencias de la Computación e Informática
### www.slothlab.info

---

# Who Am I?

* BSc and MSc from Graz University of Technology, Austria
* PhD from NC State University, USA
* Dissertation topic: Intentional Agents for Doxastic Games
* I study how human players and AI agents interact

---

# Why Games?

* What are games?
* (Most) games have goals and rules
* Rules are nice for computers
* Goals are nice for evaluation
* AI research is often driven by game AI

---

# Why Games?

* Games provide an environment in which to safely develop new AI techniques
* The skills needed in games often mirror real-world tasks
* These techniques can often be applied later in other domains
* IBM Watson initially played Jeopardy!, then became a general question-answering tool

---

# Why Games?

* Finally, games are interesting in their own right
* Games are a cultural phenomenon
* The games industry is likely to surpass the sports industry in size within a few years
* And games are fun!

---

# Games and AI

* There are many ways in which AI can be applied to games
* The most obvious is to play games. But what is the goal?
  - Optimize the score/win rate
  - Optimize the human player's experience
* AI can also be used to generate parts of games
* Finally, AI methods can be used to analyze player behavior

---

# Games Involving Communication
---

class: center, middle

# Human Communication

---

# A little bit of communication theory

* Human communication follows several patterns
* The goal of a conversation is usually mutual understanding between the participants
* Note that several situations violate this idea: jokes, surprise, deception, etc.
* In general, however, conversations follow the *cooperative principle*

???

.smallc[
lie: mentir

pattern: el patrón

mislead: engañar
]

---

# Grice's Maxims of Communication

* H.P. Grice divided the cooperative principle into four maxims:
  - The Maxim of Quality
  - The Maxim of Quantity
  - The Maxim of Relation
  - The Maxim of Manner

???

.smallc[
maxim: la máxima

at least: al menos

at most: a lo sumo

ambiguous: ambiguo/a
]

---

class: medium

# Indirect Speech Acts

* Humans assume these maxims are followed in order to derive meaning
* In many situations, the intended meaning of a speech act has to be deduced from its literal content
* For example: If I say "It is hot in here" to someone standing next to the window, I could actually mean "Could you please open the window?"
* Another, more commonly cited, example is "Can you pass me the salt?"

???

.smallc[
meaning: sentido

sentence: frase
]

---

class: medium

# Intentionality

* How do we deduce the meaning of speech acts?
* Grice's maxims tell us that uttered sentences should be true, necessary, relevant, and unambiguous
* To determine what makes a sentence necessary and relevant, we determine which goal the speaker might have
* Intentionality refers to the assumption that actions are performed in service of a goal
* If I say "It is hot in here" to someone by a window, my *goal* is most likely to cool down (see the sketch on the next slide)
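---

class: ssmall

# Intentionality: A Sketch

A minimal sketch of this kind of goal-based interpretation, with an entirely hypothetical knowledge base and context model; real intent recognition is considerably richer:

```python
# Illustrative only: interpret an utterance by asking which of the
# speaker's plausible goals its literal content would serve.

# Toy knowledge base: which goals each utterance can serve.
GOALS_SERVED = {
    "It is hot in here": {"cool down", "make small talk"},
}

def plausible_goals(context):
    # Toy context model: what the speaker is likely after right now.
    return {"cool down"} if context == "speaker is sweating" else {"make small talk"}

def interpret(utterance, context):
    """Keep only the goals the utterance serves AND the context supports."""
    return GOALS_SERVED.get(utterance, set()) & plausible_goals(context)

print(interpret("It is hot in here", "speaker is sweating"))  # {'cool down'}
```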
---

class: center, middle

# AI Agent Communication

---

# Hanabi

* Cooperative card game
* Cards in 5 colors, with ranks 1 to 5
* Players can't see their own cards
* Communication is restricted

???

.smallc[
their own cards: sus propias cartas

restricted: restringido

speech act: acto de habla
]

---

class: medium

# Hanabi

* Players have to decide which card to play without knowing which cards they hold
* A player may give another player a *hint* (see the sketch on the next slide):
  - Tell them about all of their cards of one color
  - Tell them about all of their cards of one rank
* The number of hints is limited
* Human players make very effective use of these hints

???

.smallc[
hint: indicio
]
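---

class: ssmall

# Hanabi Hints: A Sketch

A minimal Python sketch of the hint mechanics (illustrative names and structures, not our agent's actual implementation): a color hint names every card of that color in a hand and implicitly rules that color out everywhere else.

```python
from dataclasses import dataclass, field

COLORS = ["red", "green", "blue", "yellow", "white"]
RANKS = [1, 2, 3, 4, 5]

@dataclass
class CardBelief:
    # Initially, every (color, rank) combination is considered possible.
    possible: set = field(
        default_factory=lambda: {(c, r) for c in COLORS for r in RANKS})

def apply_color_hint(beliefs, hand, color):
    """Prune each card's possibilities according to a color hint."""
    for belief, (card_color, _) in zip(beliefs, hand):
        if card_color == color:
            belief.possible = {p for p in belief.possible if p[0] == color}
        else:
            belief.possible = {p for p in belief.possible if p[0] != color}

hand = [("red", 1), ("blue", 3)]        # hidden from their owner
beliefs = [CardBelief(), CardBelief()]
apply_color_hint(beliefs, hand, "red")
print(len(beliefs[0].possible))  # 5: this card is red, rank unknown
print(len(beliefs[1].possible))  # 20: this card can be anything but red
```

Rank hints work analogously, pruning on rank instead of color.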
---

# Challenges

* Players have to determine which hints to give, and when
* They also have to determine how to interpret other players' hints
* Human players are really good at this
* Google Brain and DeepMind have called Hanabi "a new frontier for AI research"

---

# AI Challenges

* There are ways to play this game (near-)optimally
* However, these strategies are incomprehensible to human players
* On the other hand, humans can also play the game well
* How can we build an AI agent that plays well with human players?

---

# Intentionality and Grice's Maxims

* Each *hint* is a speech act
* Speech acts should serve goals
* When the AI agent gives a hint, the human player will try to deduce this goal
* The AI agent can use Grice's maxims to *predict* how each of its hints will likely be received

---

# A model of belief

* In order to predict what the player will do, the AI agent needs to model what the player believes
* A speech act/hint changes this belief
* The prediction model then matches the changed beliefs with potential goals in the game
* The AI agent then forms its own goal, and only gives hints for which it predicts that the player will pursue the same goal

???

.smallc[
belief: creencia
]

---

# A Gricean game

* The game only allows hints that are true (maxim of quality)
* Using its model of the player's beliefs, the agent avoids giving redundant hints (maxim of quantity)
* A hint is only given if it serves a goal (maxim of relation)
* If a hint could serve multiple goals, it is not given (maxim of manner)

The next slide sketches these maxims as filters over candidate hints

???

.smallc[
redundant: redundante

I did an experiment: Hice un experimento
]
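---

class: ssmall

# A Gricean Game: A Sketch

A minimal sketch of maxim-based hint filtering; `update` and `predict_goals` are hypothetical stand-ins for the agent's belief and prediction models:

```python
def gricean_hints(candidate_hints, belief, agent_goal, update, predict_goals):
    """Filter candidate hints through Grice's maxims.

    update(belief, hint) returns the player's belief after hearing the hint;
    predict_goals(belief) returns the goals that belief would suggest.
    Both are hypothetical stand-ins for the agent's actual models."""
    chosen = []
    for hint in candidate_hints:
        # Quality: the game rules already restrict hints to true ones.
        new_belief = update(belief, hint)
        if new_belief == belief:        # Quantity: must be informative
            continue
        goals = predict_goals(new_belief)
        if agent_goal not in goals:     # Relation: must serve the agent's goal
            continue
        if len(goals) > 1:              # Manner: must be unambiguous
            continue
        chosen.append(hint)
    return chosen
```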
---

# Results

---

# Non-Verbal Communication

* The timing of actions can indicate whether a decision was easy or hard to make
* Gaze can indicate a person's intentions
* Facial expressions provide insight into a person's emotions
* AI agents can use this information to help interpret human communication

???

.smallc[
timing: ritmo

gaze: mirada

angry: enojado/a

happy: feliz

subtle: sutil
]

---

# Gaze in Hanabi
---

# Timing in Hanabi

Longer thinking times typically indicate less certainty
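---

class: ssmall

# Timing in Hanabi: A Sketch

A minimal sketch of how an agent could use thinking time to break ties between readings of a hint; the cutoff and all names are made up for illustration:

```python
QUICK_SECONDS = 3.0  # hypothetical cutoff for a "confident" hint

def rank_interpretations(interpretations, thinking_time):
    """interpretations: (action, certainty) pairs for readings of one hint.
    A quick hint suggests an obvious, high-certainty reading; a slow hint
    makes a more tentative reading more plausible."""
    confident = thinking_time <= QUICK_SECONDS
    return sorted(interpretations, key=lambda pair: pair[1], reverse=confident)

options = [("play card 2", 0.9), ("save card 2", 0.4)]
print(rank_interpretations(options, thinking_time=1.5)[0][0])  # play card 2
print(rank_interpretations(options, thinking_time=8.0)[0][0])  # save card 2
```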
---

class: center, middle

# Conclusion

---

# Conclusion

* Using communication theory allows AI agents to play games relatively well with human players
* Non-verbal communication plays a role, but its interpretation is tricky
* Players often miss more human-like features in AI agents
* AI agents need to be able to *interpret* communication, as well as to *use* communicative actions well

---

# Other Games

* As mentioned, communication is a part of many games
* Some games may allow players to lie
* Modeling such communication can use similar techniques
* On the other hand, some games have no explicit communication, but still require cooperation

---

# Other Games: One Night Ultimate Werewolf
---

# Other Games: Pandemic
---

class: medium

# Other Applications

* AI agents that use belief models for communication have various applications
* Narrative generation can use beliefs to better determine what to show to the viewer
* Similarly, many stories rely on asymmetric beliefs (e.g. detective stories)
* Intelligent tutoring systems that give hints to students are another potential application
* In the end, *any* system that needs to communicate with or assist a human user can benefit from improved communication

???

.smallc[
summary: resumen

murderer: asesino/a
]

---

# Future Developments

* The effects of non-verbal communication are very subtle
* Our Hanabi agents only use this information to disambiguate between multiple viable alternatives
* Future work could determine a clearer interpretation of such signals
* Conversely, our agents only interpret the human player's non-verbal signals, but do not exhibit their own

???

.smallc[
face: rostro

appearance: la apariencia
]

---

class: center, middle

# Thank You For Your Attention

## markus.eger@ucr.ac.cr
## Twitter: @yawgmoth46
## http://www.github.com/yawgmoth

---

class: ssmall

# References

* Markus Eger and Chris Martens. *Keeping the Story Straight: A Comparison of Commitment Strategies for a Social Deduction Game.* AIIDE 2018
* Nolan Bard et al. *The Hanabi Challenge: A New Frontier for AI Research.* arXiv:1902.00506
* Markus Eger and Chris Martens. *Practical Specification of Belief Manipulation in Games.* AIIDE 2017
* Markus Eger, Chris Martens, and Marcela Alfaro Cordoba. *An Intentional AI for Hanabi.* CIG 2017
* Eva Tallula Gottwald, Markus Eger, and Chris Martens. *I See What You See: Integrating Eye Tracking into Hanabi Playing Agents.* EXAG 2018
* Markus Eger and Chris Martens. *A Browser-based Interface for the Exploration and Evaluation of Hanabi AIs.* Tech demo at FDG 2017