
Author name cluster

Jonathan Gratch

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

48 papers
2 author rows

Possible papers (48)

AAAI Conference 2026 Conference Paper

Can LLMs Truly Embody Human Personality? Analyzing AI and Human Behavior Alignment in Dispute Resolution

  • Deuksin Kwon
  • Kaleen Shrestha
  • Bin Han
  • Spencer Lin
  • James Hale
  • Jonathan Gratch
  • Maja Mataric
  • Gale M. Lucas

Large language models (LLMs) are increasingly used to simulate human behavior in social settings such as legal mediation, negotiation, and dispute resolution. However, it remains unclear whether these simulations reproduce the personality–behavior patterns observed in humans. Human personality, for instance, shapes how individuals navigate social interactions, including strategic choices and behaviors in emotionally charged interactions. This raises the question: Can LLMs, when prompted with personality traits, reproduce personality-driven differences in human conflict behavior? To explore this, we introduce an evaluation framework that enables direct comparison of human-human and LLM-LLM behaviors in dispute resolution dialogues with respect to Big Five Inventory (BFI) personality traits. This framework provides a set of interpretable metrics related to strategic behavior and conflict outcomes. We additionally contribute a novel dataset creation methodology for LLM dispute resolution dialogues with matched scenarios and personality traits with respect to human conversations. Finally, we demonstrate the use of our evaluation framework with three contemporary closed-source LLMs and show significant divergences in how personality manifests in conflict across different LLMs compared to human data, challenging the assumption that personality-prompted agents can serve as reliable behavioral proxies in socially impactful applications. Our work highlights the need for psychological grounding and validation in AI simulations before real-world use.
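
As a rough illustration of the kind of comparison such a framework enables, here is a minimal sketch: the trait, metric, and data below are invented for the example, not taken from the paper's actual schema or results.

```python
# Hypothetical sketch: compare how a Big Five trait correlates with a
# conflict-behavior metric in human vs. LLM dialogue corpora.
# Field names and numbers are illustrative only.
from statistics import correlation  # Python 3.10+

human_dialogues = [
    {"agreeableness": 0.8, "concession_rate": 0.70},
    {"agreeableness": 0.3, "concession_rate": 0.35},
    {"agreeableness": 0.6, "concession_rate": 0.55},
    {"agreeableness": 0.2, "concession_rate": 0.30},
]
llm_dialogues = [
    {"agreeableness": 0.8, "concession_rate": 0.50},
    {"agreeableness": 0.3, "concession_rate": 0.52},
    {"agreeableness": 0.6, "concession_rate": 0.49},
    {"agreeableness": 0.2, "concession_rate": 0.51},
]

def trait_behavior_r(dialogues, trait, metric):
    """Pearson correlation between a personality trait and a behavior metric."""
    return correlation([d[trait] for d in dialogues],
                       [d[metric] for d in dialogues])

r_human = trait_behavior_r(human_dialogues, "agreeableness", "concession_rate")
r_llm = trait_behavior_r(llm_dialogues, "agreeableness", "concession_rate")
# A large gap between the correlations would indicate that personality-prompted
# agents do not reproduce the human trait-behavior link.
print(f"human r={r_human:+.2f}  llm r={r_llm:+.2f}  gap={abs(r_human - r_llm):.2f}")
```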

JAAMAS Journal 2025 Journal Article

“Provably fair” algorithms may perpetuate racial and gender bias: a study of salary dispute resolution

  • James Hale
  • Peter H. Kim
  • Jonathan Gratch

Prior work suggests automated dispute resolution tools using “provably fair” algorithms can address disparities between demographic groups. These methods use multi-criteria preferences elicited from all disputants and satisfy constraints to generate “fair” solutions. However, we analyze the potential for inequity to permeate proposals through the preference elicitation stage. This possibility arises if dispositional attitudes differ between demographic groups and those dispositions affect elicited preferences. Specifically, risk aversion plays a prominent role in predicting preferences: it predicts a weaker relative preference for salary and a softer within-issue utility for each issue, which leads to worse compensation packages for risk-averse groups. These results raise important questions in AI value alignment about whether an AI mediator should take explicit preferences at face value.
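
A minimal sketch of the mechanism the abstract describes, under stated assumptions: the concave utility curve and issue weights below are invented for illustration and are not the paper's elicitation procedure.

```python
# Illustrative sketch (not the paper's algorithm): how risk aversion can
# flatten an elicited within-issue utility for salary and lower its weight,
# so a mediator that takes stated utilities at face value awards less salary.
import math

def salary_utility(fraction_of_max, risk_aversion):
    """Within-issue utility for salary; higher risk_aversion -> more concave
    (softer) curve, i.e. less stated gain from pushing salary higher."""
    if risk_aversion == 0:
        return fraction_of_max
    return (1 - math.exp(-risk_aversion * fraction_of_max)) / (1 - math.exp(-risk_aversion))

for label, ra, salary_weight in [("risk-tolerant", 0.0, 0.7),
                                 ("risk-averse", 4.0, 0.5)]:
    # Stated marginal value of moving salary from 50% to 100% of the maximum.
    gain = salary_weight * (salary_utility(1.0, ra) - salary_utility(0.5, ra))
    print(f"{label:13s} stated gain from doubling salary: {gain:.2f}")
# The risk-averse profile reports a smaller gain, so a "provably fair"
# optimizer trades its salary away more cheaply against other issues.
```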

EUMAS Conference 2020 Conference Paper

Challenges and Main Results of the Automated Negotiating Agents Competition (ANAC) 2019

  • Reyhan Aydogan
  • Tim Baarslag
  • Katsuhide Fujita
  • Johnathan Mell
  • Jonathan Gratch
  • Dave de Jonge
  • Yasser Mohammad
  • Shinji Nakadai

The Automated Negotiating Agents Competition (ANAC) is an annual international contest in which participants from all over the world develop intelligent negotiating agents for a variety of negotiation problems. To facilitate research on agent-based negotiation, the organizers introduce new research challenges every year. ANAC 2019 posed five negotiation challenges: automated negotiation with partial preferences, repeated human-agent negotiation, negotiation in supply-chain management, negotiating in the strategic game of Diplomacy, and negotiating in the Werewolf game. This paper introduces the challenges and discusses the main findings and lessons learnt per league.

JAIR Journal 2020 Journal Article

The Effects of Experience on Deception in Human-Agent Negotiation

  • Johnathan Mell
  • Gale Lucas
  • Sharon Mozgai
  • Jonathan Gratch

Negotiation is the complex social process by which multiple parties come to mutual agreement over a series of issues. As such, it has proven to be a key challenge problem for designing adequately social AIs that can effectively navigate this space. AI agents capable of negotiating must realize policies and strategies that govern offer acceptance, offer generation, preference elicitation, and more. But the next generation of agents must also adapt to reflect their users’ experiences. The best human negotiators tend to have honed their craft through hours of practice and experience. But not all negotiators agree on which strategic tactics to use, and endorsement of deceptive tactics in particular is a controversial topic for many negotiators. We examine the ways in which deceptive tactics are used and endorsed in non-repeated human negotiation and show that prior experience plays a key role in governing which tactics are seen as acceptable or useful. Previous work has indicated that people who negotiate through artificial agent representatives may be more inclined to fairness than those who negotiate directly. We present a series of three user studies that challenge this initial assumption and expand on this picture by examining the role of past experience. This work constructs a new scale for measuring endorsement of manipulative negotiation tactics and introduces its use to artificial intelligence research. It then presents the results of three studies that examine how negotiating experience can change which negotiation tactics and strategies humans endorse. Study #1 looks at human endorsement of deceptive techniques based on prior negotiating experience as well as representative effects. Study #2 further characterizes the negativity of prior experience in relation to endorsement of deceptive techniques. Finally, in Study #3, we show that the lessons learned from the empirical observations in Studies #1 and #2 can in fact be induced: by designing agents that provide a specific type of negative experience, human endorsement of deception can be predictably manipulated.

AAMAS Conference 2019 Conference Paper

Recognising and Explaining Bidding Strategies in Negotiation Support Systems

  • Vincent J. Koeman
  • Koen V. Hindriks
  • Jonathan Gratch
  • Catholijn M. Jonker

To improve a negotiator’s ability to recognise bidding strategies, we proactively provide explanations that are based on the opponent’s bids and the negotiator’s guesses about the opponent’s strategy. We introduce an aberration detection mechanism for recognising strategies and the notion of an explanation matrix. The aberration detection mechanism identifies when a bid falls outside the range of expected behaviour for a specific strategy. The explanation matrix is used to decide when to provide which explanations. We evaluated our work experimentally in a task in which participants are asked to identify their opponent’s strategy in the environment of a negotiation support system, namely the Pocket Negotiator (PN). We implemented our explanation mechanism in the PN and experimented with different explanation matrices. As the number of correct guesses increases with explanations, these experiments indirectly show the effectiveness of our aberration detection mechanism. Our experiments with over 100 participants show that suggesting consistent strategies is more effective than explaining why observed behaviour is inconsistent.
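
A hedged sketch of the two ideas named above, aberration detection and an explanation matrix, using invented strategy models and messages rather than the Pocket Negotiator's actual ones.

```python
# Sketch: each candidate strategy predicts an expected utility band for the
# opponent's bid at relative time t; a bid outside the band is an aberration,
# and the "explanation matrix" maps (strategy, verdict) to a message.

def expected_band(strategy, t):
    """Expected utility range of the opponent's bid at relative time t in [0, 1]."""
    if strategy == "hardliner":          # concedes almost nothing
        return (0.85, 1.00)
    if strategy == "conceder":           # concedes steadily over time
        hi = 1.0 - 0.5 * t
        return (hi - 0.15, hi)
    raise ValueError(strategy)

def is_aberration(strategy, t, bid_utility, tolerance=0.05):
    lo, hi = expected_band(strategy, t)
    return not (lo - tolerance <= bid_utility <= hi + tolerance)

# Explanation matrix: which message to show for (guessed strategy, verdict).
EXPLANATIONS = {
    ("hardliner", False): "Consistent: the opponent is still conceding very little.",
    ("hardliner", True):  "Inconsistent: a hardliner would not concede this much.",
    ("conceder", False):  "Consistent: concessions are arriving at a steady rate.",
    ("conceder", True):   "Inconsistent: a conceder would have conceded more by now.",
}

guess, t, bid_utility = "hardliner", 0.6, 0.55   # user's guess vs. observed bid
verdict = is_aberration(guess, t, bid_utility)
print(EXPLANATIONS[(guess, verdict)])
```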

AAMAS Conference 2018 Conference Paper

Social Decisions and Fairness Change When People's Interests Are Represented by Autonomous Agents

  • Celso M. de Melo
  • Stacy Marsella
  • Jonathan Gratch

Recent years have seen the emergence of a new breed of intelligent machines that act autonomously on our behalf, such as autonomous vehicles, drones, and personal assistants. These machines introduce a new interaction paradigm in which people instruct, or program, agents to act on their behalf with others. Here we show that this act of programming changes the way people think about the situation, often leading them to adopt a broader perspective and act more fairly. We present four studies where participants made fairer decisions in ultimatum and negotiation tasks when engaging through an agent representative, when compared to direct interaction with others. These findings emphasize the importance of understanding the cognitive factors underlying people’s decision making when designing autonomous machines, if we wish to promote a fairer society.

AAMAS Conference 2017 Conference Paper

Grumpy & Pinocchio: Answering Human-Agent Negotiation Questions through Realistic Agent Design

  • Johnathan Mell
  • Jonathan Gratch

We present the Interactive Arbitration Guide Online (IAGO) platform, a tool for designing human-aware agents for use in negotiation. Current state-of-the-art research platforms are ideally suited for agent-agent interaction. While helpful, these often fail to address the reality of human negotiation, which involves irrational actors, natural language, and deception. To illustrate the strengths of the IAGO platform, we describe four agents designed to showcase the key design features of the system. We go on to show how these agents might be used to answer core questions in human-centered computing, by reproducing classical human-human negotiation results in a 2x2 human-agent study. The study presents results largely in line with expectations from human-human negotiation outcomes, and helps to demonstrate the validity and usefulness of the IAGO platform.

AAMAS Conference 2017 Conference Paper

Incorporating Emotion Perception into Opponent Modeling for Social Dilemmas

  • Rens Hoegen
  • Giota Stratou
  • Jonathan Gratch

Many everyday decisions involve a social dilemma: cooperation can enhance joint gains, but also make one vulnerable to exploitation. Emotion and emotional signaling are an important element of how people resolve these dilemmas. With the rise of affective computing, emotion is also an important element of how people resolve these dilemmas with machines. In this article, we learn a predictive model of how people make decisions in an iterated social dilemma. We further show that model accuracy improves by incorporating a player’s emotional displays as input, and provide some insight into which emotions influence social decisions. Finally, we show how this model can be used to perform “social planning”: i.e., to generate a sequence of actions and expressions that achieve social goals (such as maximizing individual rewards). These techniques can be used to enhance machine understanding of human behavior, as social decision aids, or to drive the actions of virtual and robotic agents.
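
A toy sketch of the pipeline described, with invented probabilities: predict the counterpart's next move from their emotional display, then plan the action and expression pair that maximizes expected payoff.

```python
# Toy sketch (invented numbers, not the paper's learned model): an opponent
# model maps our (action, expression) and the player's last display to
# P(cooperate), and one-step "social planning" searches over that model.
PD_PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# P(player cooperates next | our (action, expression), their last display)
P_COOP = {
    (("C", "smile"), "joy"): 0.80,   (("C", "smile"), "anger"): 0.45,
    (("D", "smile"), "joy"): 0.30,   (("D", "smile"), "anger"): 0.10,
    (("C", "neutral"), "joy"): 0.65, (("C", "neutral"), "anger"): 0.35,
    (("D", "neutral"), "joy"): 0.25, (("D", "neutral"), "anger"): 0.08,
}
MOVES = [("C", "smile"), ("C", "neutral"), ("D", "smile"), ("D", "neutral")]

def plan(last_display):
    """Pick the (action, expression) pair with the best expected payoff."""
    def expected_payoff(move):
        p = P_COOP[(move, last_display)]
        action = move[0]
        return p * PD_PAYOFF[(action, "C")] + (1 - p) * PD_PAYOFF[(action, "D")]
    return max(MOVES, key=expected_payoff)

print(plan("joy"))    # -> ('C', 'smile'): cooperating looks best after joy
print(plan("anger"))  # -> ('D', 'smile'): defection becomes more attractive
```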

JAAMAS Journal 2017 Journal Article

Social decisions and fairness change when people’s interests are represented by autonomous agents

  • Celso M. de Melo
  • Stacy Marsella
  • Jonathan Gratch

There has been growing interest in agents that represent people’s interests or act on their behalf, such as automated negotiators, self-driving cars, or drones. Even though people will often interact with others via these agent representatives, little is known about whether people’s behavior changes when acting through these agents, compared to direct interaction with others. Here we show that people’s decisions change in important ways because of these agents; specifically, we show that interacting via agents is likely to lead people to behave more fairly than direct interaction with others. We argue this occurs because programming an agent leads people to adopt a broader perspective, consider the other side’s position, and rely on social norms, such as fairness, to guide their decision making. To support this argument, we present four experiments: in Experiment 1 we show that people made fairer offers in the ultimatum and impunity games when interacting via agent representatives, when compared to direct interaction; in Experiment 2, participants were less likely to accept unfair offers in these games when agent representatives were involved; in Experiment 3, we show that the act of thinking about the decisions ahead of time (i.e., under the so-called “strategy method”) can also lead to increased fairness, even when no agents are involved; and, finally, in Experiment 4 we show that participants were less likely to reach an agreement with unfair counterparts in a negotiation setting. We discuss theoretical implications for our understanding of the nature of people’s social behavior with agent representatives, as well as practical implications for the design of agents that have the potential to increase fairness in society.

AAMAS Conference 2017 Conference Paper

Towards An Autonomous Agent that Provides Automated Feedback on Students' Negotiation Skills

  • Emmanuel Johnson
  • Jonathan Gratch
  • David DeVault

Although negotiation is an integral part of daily life, most people are unskilled negotiators. To improve one’s skill set, a range of costly options including self-study guides, courses, and training programs are offered by various companies and educational institutions. For those who cannot afford costly training options, virtual role-playing agents offer a low-cost alternative. To be effective, these systems must allow students to engage in experiential learning exercises and provide personalized feedback on the learner’s performance. In this paper, we show how a number of negotiation principles can be formalized and quantified. We then establish the pedagogical relevance of several automatic metrics and show that these metrics are significantly correlated with negotiation outcomes in human-agent negotiation. This illustrates the realism of the metrics and helps to validate the underlying principles. It also shows the potential of technology to quantify feedback that is traditionally provided through more qualitative approaches. The metrics we describe can provide students with personalized feedback on the errors they make in a negotiation exercise and thereby support guided experiential learning.

IJCAI Conference 2017 Conference Paper

When Will Negotiation Agents Be Able to Represent Us? The Challenges and Opportunities for Autonomous Negotiators

  • Tim Baarslag
  • Michael Kaisers
  • Enrico H. Gerding
  • Catholijn M. Jonker
  • Jonathan Gratch

Computers that negotiate on our behalf hold great promise for the future and will even become indispensable in emerging application domains such as the smart grid and the Internet of Things. Much research has thus been expended to create agents that are able to negotiate in an abundance of circumstances. However, up until now, truly autonomous negotiators have rarely been deployed in real-world applications. This paper sizes up current negotiating agents and explores a number of technological, societal and ethical challenges that autonomous negotiation systems have brought about. The questions we address are: in what sense are these systems autonomous, what has been holding back their further proliferation, and is their spread something we should encourage? We relate the automated negotiation research agenda to dimensions of autonomy and distill three major themes that we believe will propel autonomous negotiation forward: accurate representation, long-term perspective, and user trust. We argue these orthogonal research directions need to be aligned and advanced in unison to sustain tangible progress in the field.

AAMAS Conference 2016 Conference Paper

"Do as I Say, Not as I Do: " Challenges in Delegating Decisions to Automated Agents

  • Celso M. de Melo
  • Stacy Marsella
  • Jonathan Gratch

There has been growing interest, across various domains, in computer agents that can decide on behalf of humans. These agents have the potential to save considerable time and help humans reach better decisions. One implicit assumption, however, is that, as long as the algorithms that simulate decision-making are correct and capture how humans make decisions, humans will treat these agents similarly to other humans. Here we show that interaction with agents that act on our behalf or on behalf of others is richer and more interesting than initially expected. Our results show that, on the one hand, people are more selfish with agents acting on behalf of others than when interacting directly with others. We propose that agents increase the social distance with others which, subsequently, leads to increased demand. On the other hand, when people task an agent to interact with others, people show more concern for fairness than when interacting directly with others. In this case, higher psychological distance leads people to consider their social image and the long-term consequences of their actions and, thus, behave more fairly. To support these findings, we present an experiment where people engaged in the ultimatum game, either directly or via an agent, with others or with agents representing others. We show that these patterns of behavior also occur in a variant of the ultimatum game, the impunity game, where others have minimal power over the final outcome. Finally, we study how social value orientation (i.e., people’s propensity for cooperation) impacts these effects. These results have important implications for our understanding of the psychological mechanisms underlying interaction with agents, as well as practical implications for the design of successful agents that act on our behalf or on behalf of others.

IJCAI Conference 2016 Conference Paper

Predictive Models of Malicious Behavior in Human Negotiations

  • Zahra Nazari
  • Jonathan Gratch

Human and artificial negotiators must exchange information to find efficient negotiated agreements, but malicious actors could use deception to gain unfair advantage. The misrepresentation game is a game-theoretic formulation of how deceptive actors could gain disproportionate rewards while seeming honest and fair. Previous research proposed a solution to this game, but it required restrictive assumptions that might render it inapplicable to real-world settings. Here we evaluate the formalism against a large corpus of human face-to-face negotiations. We confirm that the model captures how dishonest human negotiators win while seeming fair, even in unstructured negotiations. We also show that deceptive negotiators give off signals of their malicious behavior, providing the opportunity for algorithms to detect and defeat this malicious tactic.

AAAI Conference 2015 Conference Paper

SimSensei Demonstration: A Perceptive Virtual Human Interviewer for Healthcare Applications

  • Louis-Philippe Morency
  • Giota Stratou
  • David DeVault
  • Arno Hartholt
  • Margo Lhommet
  • Gale Lucas
  • Fabrizio Morbini
  • Kallirroi Georgila

We present the SimSensei system, a fully automatic virtual agent that conducts interviews to assess indicators of psychological distress. We emphasize the perception component of the system, a multimodal framework that captures and analyzes user state for both behavioral understanding and interactional purposes.

AAAI Conference 2014 Conference Paper

The Importance of Cognition and Affect for Artificially Intelligent Decision Makers

  • Celso de Melo
  • Jonathan Gratch
  • Peter Carnevale

Agency (the capacity to plan and act) and experience (the capacity to sense and feel) are two critical aspects that determine whether people will perceive non-human entities, such as autonomous agents, to have a mind. There is evidence that the absence of either can reduce cooperation. We present an experiment that tests the necessity of both for cooperation with agents. In this experiment we manipulated people’s perceptions about the cognitive and affective abilities of agents in the ultimatum game. The results indicated that people offered more money to agents that were perceived to make decisions according to their intentions (high agency), rather than randomly (low agency). Additionally, the results showed that people offered more money to agents that expressed emotion (high experience), when compared to agents that did not (low experience). We discuss the implications of this agency-experience theoretical framework for the design of artificially intelligent decision makers.

AAMAS Conference 2012 Conference Paper

Bayesian Model of the Social Effects of Emotion in Decision-Making in Multiagent Systems

  • Celso de Melo
  • Peter Carnevale
  • Stephen Read
  • Dimitrios Antos
  • Jonathan Gratch

Research in the behavioral sciences suggests that emotion can serve important social functions and that, more than a simple manifestation of internal experience, emotion displays communicate one's beliefs, desires, and intentions. In a recent study we have shown that, when engaged in the iterated prisoner's dilemma with agents that display emotion, people infer from the emotion displays how the agent is appraising the ongoing interaction (e.g., is the situation favorable to the agent? Does it blame me for the current state of affairs?). From these appraisals, people then infer whether the agent is likely to cooperate in the future. In this paper we propose a Bayesian model that captures this social function of emotion. The model supports probabilistic predictions, from emotion displays, about how the counterpart is appraising the interaction, which in turn lead to predictions about the counterpart's intentions. The model's parameters were learned using data from the empirical study. Our evaluation indicated that considering emotion displays improved the model's ability to predict the counterpart's intentions, in particular how likely it was to cooperate in a social dilemma. Using data from another empirical study, where people made inferences about the counterpart's likelihood of cooperation in the absence of emotion displays, we also showed that the model could, from information about appraisals alone, make appropriate inferences about the counterpart's intentions. Overall, the paper suggests that appraisals are valuable for computational models of emotion interpretation. The relevance of these results for the design of multiagent systems where agents, human or not, can convey or recognize emotion is discussed.
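
A minimal sketch of the two-stage inference the abstract describes, with invented numbers standing in for the parameters the paper learns from empirical data.

```python
# Minimal sketch: display -> posterior over appraisals -> P(cooperate).
# All probabilities below are invented placeholders.

PRIOR = {"favorable": 0.5, "unfavorable": 0.5}          # P(appraisal)
LIKELIHOOD = {                                           # P(display | appraisal)
    ("joy", "favorable"): 0.7, ("joy", "unfavorable"): 0.1,
    ("anger", "favorable"): 0.1, ("anger", "unfavorable"): 0.6,
    ("neutral", "favorable"): 0.2, ("neutral", "unfavorable"): 0.3,
}
P_COOPERATE = {"favorable": 0.8, "unfavorable": 0.2}     # P(cooperate | appraisal)

def p_cooperate_given_display(display):
    # Bayes rule: P(appraisal | display) ~ P(display | appraisal) * P(appraisal)
    joint = {a: LIKELIHOOD[(display, a)] * PRIOR[a] for a in PRIOR}
    z = sum(joint.values())
    posterior = {a: j / z for a, j in joint.items()}
    # Marginalize over appraisals to predict the counterpart's intention.
    return sum(P_COOPERATE[a] * posterior[a] for a in posterior)

for d in ("joy", "anger", "neutral"):
    print(f"P(cooperate | {d}) = {p_cooperate_given_display(d):.2f}")
```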

AAMAS Conference 2012 Conference Paper

Towards Building a Virtual Counselor: Modeling Nonverbal Behavior during Intimate Self-Disclosure

  • Sin-Hwa Kang
  • Jonathan Gratch
  • Candy Sidner
  • Ron Artstein
  • Lixing Huang
  • Louis-Philippe Morency

Nonverbal behavior is considered critical for indicating intimacy and is important when designing a social virtual agent such as a counselor. One key research question is how to appropriately express intimate self-disclosure. In this paper we present an extensive study of human nonverbal behavior during intimate self-disclosure, an important milestone in creating a virtual counselor. A study of video interactions between human participants demonstrated that people display more head tilts and pauses when they reveal highly intimate information about themselves, and present more head nods and eye gaze during less intimate sharing. An implementation of these behaviors in a virtual agent suggests that people tend to perceive head tilts, pauses, and gaze aversion by the agent as conveying intimate self-disclosure. These findings are important for future research with virtual counselors and other social agents.

AAMAS Conference 2011 Conference Paper

A Multimodal End-of-Turn Prediction Model: Learning from Parasocial Consensus Sampling

  • Lixing Huang
  • Louis-Philippe Morency
  • Jonathan Gratch

Virtual humans with realistic behaviors and social skills evoke in users a range of social behaviors normally seen only in human face-to-face interactions. One of the key challenges in creating such virtual humans is to give them human-like conversational skills, such as turn-taking. In this paper, we propose a multimodal end-of-turn prediction model. Instead of recording face-to-face conversation data, we collect turn-taking data using the Parasocial Consensus Sampling (PCS) framework. We then analyze the relationship between verbal and nonverbal features and turn-taking behaviors based on the consensus data, and show how these features influence the time people take to take turns. Finally, we present a probabilistic multimodal end-of-turn prediction model, which enables virtual humans to make real-time turn-taking predictions. The results show that our model achieves higher accuracy than previous methods.
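
A simplified illustration of combining verbal and nonverbal cues into an end-of-turn probability: the features and hand-set weights below are assumptions for the sketch, whereas the paper learns its model from parasocial consensus data.

```python
# Illustrative multimodal end-of-turn scorer (hand-set weights, not the
# learned PCS model): combine verbal and nonverbal cues into a probability
# that the speaker is yielding the turn at this moment.
import math

WEIGHTS = {          # invented feature weights; a real model learns these
    "pause_sec": 2.0,        # longer silence -> more likely end of turn
    "pitch_drop": 1.5,       # falling pitch at utterance end
    "gaze_at_listener": 1.0, # speakers look at listeners when yielding
    "filled_pause": -2.5,    # "um"/"uh" signals the speaker will continue
}
BIAS = -2.0

def p_end_of_turn(features):
    score = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-score))   # logistic link

frame = {"pause_sec": 0.8, "pitch_drop": 1.0,
         "gaze_at_listener": 1.0, "filled_pause": 0.0}
print(f"P(end of turn) = {p_end_of_turn(frame):.2f}")  # agent may take the turn if high
```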

IS Journal 2011 Journal Article

Social and Economic Computing

  • Wenji Mao
  • Alexander Tuzhilin
  • Jonathan Gratch

Social and economic computing is a cross-disciplinary field focusing on the development of computing technologies that consider social and economic contexts. Social computing and economic computing not only share a number of computing technologies, they also benefit from and cross-fertilize each other in computational theories, models, and design. This special issue presents representative research in social and economic computing from several perspectives.

AAMAS Conference 2011 Conference Paper

The Effect of Expression of Anger and Happiness in Computer Agents on Negotiations with Humans

  • Celso M. de Melo
  • Peter Carnevale
  • Jonathan Gratch

There is now considerable evidence in social psychology, economics, and related disciplines that emotion plays an important role in negotiation. For example, humans make greater concessions in negotiation to an opposing human who expresses anger, and they make fewer concessions to an opponent who expresses happiness, compared to a no-emotion-expression control. However, in AI, despite the wide interest in negotiation as a means to resolve differences between agents and humans, emotion has been largely ignored. This paper explores whether expression of anger or happiness by computer agents, in a multi-issue negotiation task, can produce effects that resemble effects seen in human-human negotiation. The paper presents an experiment where participants play with agents that express emotions (anger vs. happiness vs. control) through different modalities (text vs. facial displays). An important distinction in our experiment is that participants are aware that they negotiate with computer agents. The data indicate that the emotion effects observed in past work with humans also occur in agent-human negotiation, and occur independently of modality of expression. The implications of these results are discussed for the fields of automated negotiation, intelligent virtual agents and artificial intelligence.

AAAI Conference 2011 Conference Paper

The Influence of Emotion Expression on Perceptions of Trustworthiness in Negotiation

  • Dimitrios Antos
  • Celso de Melo
  • Jonathan Gratch
  • Barbara Grosz

When interacting with computer agents, people make inferences about various characteristics of these agents, such as their reliability and trustworthiness. These perceptions are significant, as they influence people’s behavior towards the agents, and may foster or inhibit repeated interactions between them. In this paper we investigate whether computer agents can use the expression of emotion to influence human perceptions of trustworthiness. In particular, we study human-computer interactions within the context of a negotiation game, in which players make alternating offers to decide on how to divide a set of resources. A series of negotiation games between a human and several agents is then followed by a “trust game.” In this game, people have to choose one among several agents to interact with, as well as how much of their resources they will entrust to it. Our results indicate that, among those agents that displayed emotion, those whose expression was in accord with their actions (strategy) during the negotiation game were generally preferred as partners in the trust game over those whose emotion expressions and actions did not mesh. Moreover, we observed that when emotion does not carry useful new information, it fails to strongly influence human decision-making behavior in a negotiation setting.

AAMAS Conference 2010 Conference Paper

A data-driven approach to model Culture-specific Communication Management Styles for Virtual Agents

  • Birgit Endrass
  • Lixing Huang
  • Elisabeth Andre
  • Jonathan Gratch

Virtual agents present a great opportunity for teaching intercultural competencies: training sessions are repeatable, virtual characters afford emotional distance, behavior can be exaggerated or generalized for training purposes, and the cost of human training partners is avoided. The way communication is coordinated varies notably across cultures. In this paper, we present our approach to simulating differences in the management of communication for the American and Arab cultures. We give an overview of behavioral tendencies described in the literature, pointing out differences between the two cultures. Grounding our expectations in empirical data, we analyzed a multimodal corpus. These findings were integrated into a demonstrator using virtual agents and evaluated in a preliminary study.

AAMAS Conference 2010 Conference Paper

Parasocial Consensus Sampling: Combining Multiple Perspectives to Learn Virtual Human Behavior

  • Lixing Huang
  • Louis-Philippe Morency
  • Jonathan Gratch

Virtual humans are embodied software agents that should not only be realistic looking but also have natural and realistic behaviors. Traditional virtual human systems learn these interaction behaviors by observing how individuals respond in face-to-face situations (i.e., direct interaction). In contrast, this paper introduces a novel methodological approach called parasocial consensus sampling (PCS), which allows multiple individuals to vicariously experience the same situation to gain insight on the typical (i.e., consensus) view of human responses in social interaction. This approach can help tease apart what is idiosyncratic from what is essential, and help reveal the strength of cues that elicit social responses. Our PCS approach has several advantages over traditional methods: (1) it integrates data from multiple independent listeners interacting with the same speaker, (2) it associates the probability of how likely feedback will be given over time, (3) it can be used as a prior to analyze and understand face-to-face interaction data, and (4) it facilitates much quicker and cheaper data collection. In this paper, we apply our PCS approach to learn a predictive model of listener backchannel feedback. Our experiments demonstrate that a virtual human driven by our PCS approach creates significantly more rapport and is perceived as more believable than the virtual human driven by face-to-face interaction data.
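
A sketch of the consensus step under simplifying assumptions (fixed time bins, key-press marks, invented data): averaging independent listeners' feedback marks per window yields the probability that feedback is warranted at that moment.

```python
# Simplified consensus step: several "parasocial" listeners each mark the
# moments where they would give backchannel feedback while watching the same
# speaker video; averaging the marks per time window gives P(feedback).
WINDOW = 0.5   # seconds per bin
VIDEO_LEN = 5.0

# Timestamps (seconds) where each of four independent listeners pressed a
# "give feedback" key (invented data).
listeners = [
    [1.1, 2.6, 4.2],
    [1.2, 4.3],
    [1.0, 2.7, 4.1],
    [2.5, 4.4],
]

bins = int(VIDEO_LEN / WINDOW)
consensus = [0.0] * bins
for marks in listeners:
    hit = {min(int(t / WINDOW), bins - 1) for t in marks}
    for b in hit:
        consensus[b] += 1 / len(listeners)

for b, p in enumerate(consensus):
    if p > 0:
        print(f"{b * WINDOW:.1f}-{(b + 1) * WINDOW:.1f}s  P(feedback) = {p:.2f}")
# High-consensus windows become positive training examples for the virtual
# human's backchannel model; idiosyncratic single-listener marks get low weight.
```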

JAAMAS Journal 2009 Journal Article

A probabilistic multimodal approach for predicting listener backchannels

  • Louis-Philippe Morency
  • Iwan de Kok
  • Jonathan Gratch

During face-to-face interactions, listeners use backchannel feedback such as head nods as a signal to the speaker that the communication is working and that they should continue speaking. Predicting these backchannel opportunities is an important milestone for building engaging and natural virtual humans. In this paper we show how sequential probabilistic models (e.g., Hidden Markov Models or Conditional Random Fields) can automatically learn from a database of human-to-human interactions to predict listener backchannels using the speaker's multimodal output features (e.g., prosody, spoken words, and eye gaze). The main challenges addressed in this paper are automatic selection of the relevant features and optimal feature representation for probabilistic models. For prediction of visual backchannel cues (i.e., head nods), our prediction model shows a statistically significant improvement over a previously published approach based on hand-crafted rules.
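
A toy sequential model in the spirit of the abstract: a hidden Markov model forward pass with invented parameters that filters, at each step, the probability that a listener nod is appropriate given a discretized speaker cue.

```python
# Toy HMM filtering sketch (invented parameters, far simpler than the paper's
# learned models): infer P(nod opportunity) from a sequence of speaker cues.
STATES = ("nod", "no_nod")
INIT = {"nod": 0.1, "no_nod": 0.9}
TRANS = {("nod", "nod"): 0.3, ("nod", "no_nod"): 0.7,
         ("no_nod", "nod"): 0.15, ("no_nod", "no_nod"): 0.85}
EMIT = {("nod", "pause"): 0.5, ("nod", "gaze"): 0.4, ("nod", "speech"): 0.1,
        ("no_nod", "pause"): 0.15, ("no_nod", "gaze"): 0.15, ("no_nod", "speech"): 0.7}

def forward(observations):
    """Filtered P(state | observations so far) at each time step."""
    belief = {s: INIT[s] * EMIT[(s, observations[0])] for s in STATES}
    out = []
    for obs in observations[1:] + [None]:
        z = sum(belief.values())
        out.append({s: b / z for s, b in belief.items()})   # normalize
        if obs is None:
            break
        # Predict with the transition model, then update with the emission.
        belief = {s: sum(out[-1][p] * TRANS[(p, s)] for p in STATES) * EMIT[(s, obs)]
                  for s in STATES}
    return out

for t, dist in enumerate(forward(["speech", "speech", "gaze", "pause"])):
    print(f"t={t}  P(nod) = {dist['nod']:.2f}")
```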

AAMAS Conference 2008 Conference Paper

Does the Contingency of Agents' Nonverbal Feedback Affect Users' Social Anxiety?

  • Sin-Hwa Kang
  • Jonathan Gratch
  • Ning Wang
  • James Watt

We explored the association between users’ social anxiety and the interactional fidelity of an agent (also referred to as a virtual human), specifically addressing whether the contingency of agents’ nonverbal feedback affects the relationship between users’ social anxiety and their feelings of rapport, performance, or judgment on interaction partners. This subject was examined across four experimental conditions where participants interacted with three different types of agents and a real human. The three types of agents included the Non-Contingent Agent, the Responsive Agent (opposite to the Non-Contingent Agent), and the Mediated Agent (controlled by a real human). The results indicated that people having greater social anxiety would feel less rapport and show worse performance while feeling more embarrassment if they experience the untimely feedback of the Non-Contingent Agent. The results also showed people having more anxiety would trust real humans less as their interaction partners. We discuss the implication of this relationship between social anxiety in a human subject and the interactional fidelity of an agent on the design of virtual characters for social skills training and therapy.

AAAI Conference 2007 System Paper

The More the Merrier: Multi-Party Negotiation with Virtual Humans

  • Patrick Kenny
  • Jonathan Gratch
  • Stacy Marsella

The goal of the Virtual Humans Project at the University of Southern California’s Institute for Creative Technologies is to enrich virtual training environments with virtual humans – autonomous agents that support face-to-face interaction with trainees in a variety of roles – through bringing together many different areas of research including speech recognition, natural language understanding, dialogue management, cognitive modeling, emotion modeling, nonverbal behavior and speech and knowledge management. The demo at AAAI will focus on our work using virtual humans to train negotiation skills. Conference attendees will negotiate with a virtual human doctor and elder to try to move a clinic out of harm’s way in single and multi-party negotiation scenarios using the latest iteration of our Virtual Humans framework. The user will use natural speech to talk to the embodied agents, who will respond in accordance with their internal task model and state. The characters will carry out a multi-party dialogue with verbal and non-verbal behavior. A video of a single-party version of the scenario was shown at AAAI-06. This new interactive demo introduces several new features, including multi-party negotiation, dynamically generated non-verbal behavior and a central ontology.

JAAMAS Journal 2005 Journal Article

Evaluating a Computational Model of Emotion

  • Jonathan Gratch
  • Stacy Marsella

Spurred by a range of potential applications, there has been a growing body of research in computational models of human emotion. To advance the development of these models, it is critical that we evaluate them against the phenomena they purport to model. In this paper, we present one method to evaluate an emotion model that compares the behavior of the model against human behavior, using a standard clinical instrument for assessing human emotion and coping. We use this method to evaluate the Emotion and Adaptation (EMA) model of Gratch and Marsella. The evaluation highlights strengths of the approach and identifies where the model needs further development.

AAAI Conference 1996 Conference Paper

Sequential Inductive Learning

  • Jonathan Gratch

This article advocates a new model for inductive learning. Called sequential induction, it helps bridge classical fixed-sample learning techniques (which are efficient but difficult to formally characterize) and worst-case approaches (which provide strong statistical guarantees but are too inefficient for practical use). Learning proceeds as a sequence of decisions that are informed by training data. By analyzing induction at the level of these decisions, and by utilizing only enough data to make each decision, sequential induction provides statistical guarantees with substantially less data than worst-case methods require. The sequential inductive model is also useful as a method for determining a sufficient sample size for inductive learning and, as such, is relevant to learning problems where the preponderance of data or the cost of gathering data precludes the use of traditional methods.
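
A sketch of the sequential-decision idea using a standard Hoeffding stopping rule (not Gratch's exact procedure): draw examples only until the observed gap between two candidate hypotheses cannot be explained by sampling noise, then commit, typically well before a worst-case fixed sample size.

```python
# Sequential sampling sketch: decide which of two hypotheses is more accurate
# using only as much data as the decision needs (simulated accuracies).
import math, random

random.seed(0)

def accuracy_sample(p):
    """Simulate grading one example: 1 if the hypothesis classifies it correctly."""
    return 1.0 if random.random() < p else 0.0

def pick_better(p_a=0.78, p_b=0.70, delta=0.05, max_n=100_000):
    sum_a = sum_b = 0.0
    for n in range(1, max_n + 1):
        sum_a += accuracy_sample(p_a)
        sum_b += accuracy_sample(p_b)
        gap = abs(sum_a - sum_b) / n
        # Hoeffding bound: each empirical mean is within eps of its true mean
        # with probability >= 1 - delta.
        eps = math.sqrt(math.log(2 / delta) / (2 * n))
        if gap > 2 * eps:          # gap cannot be explained by sampling noise
            winner = "A" if sum_a > sum_b else "B"
            return winner, n
    return "undecided", max_n

winner, n = pick_better()
print(f"chose {winner} after {n} paired examples")
```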

AAAI Conference 1994 Conference Paper

Improving Learning Performance through Rational Resource Allocation

  • Jonathan Gratch

This article shows how rational analysis can be used to minimize learning cost for a general class of statistical learning problems. We discuss the factors that influence learning cost and show that the problem of efficient learning can be cast as a resource optimization problem. Solutions found in this way can be significantly more efficient than the best solutions that do not account for these factors. We introduce a heuristic learning algorithm that approximately solves this optimization problem and document its performance improvements on synthetic and real-world problems.

ICAPS Conference 1994 Conference Paper

Producing Satisficing Solutions to Scheduling Problems: An Interactive Constraint Relaxation Approach

  • Steve A. Chien
  • Jonathan Gratch

One drawback to using constraint propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
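
A toy version of the relaxation loop described, with an invented three-task scheduling problem: constraints carry weights, and the least-important one is dropped until the remainder is satisfiable.

```python
# Iterative constraint relaxation on a tiny invented scheduling problem.
from itertools import product

def satisfiable(constraints, slots=(1, 2, 3)):
    """Brute-force check: does any slot assignment meet all constraints?"""
    tasks = sorted({t for c in constraints for t in c["tasks"]})
    for assign in product(slots, repeat=len(tasks)):
        sched = dict(zip(tasks, assign))
        if all(c["test"](sched) for c in constraints):
            return True
    return False

constraints = [  # higher weight = more important to the utility function
    {"tasks": ["a", "b"], "weight": 10, "name": "a before b",
     "test": lambda s: s["a"] < s["b"]},
    {"tasks": ["b", "c"], "weight": 8, "name": "b before c",
     "test": lambda s: s["b"] < s["c"]},
    {"tasks": ["a", "c"], "weight": 1, "name": "c before a",  # cyclic with the others
     "test": lambda s: s["c"] < s["a"]},
]

active = sorted(constraints, key=lambda c: -c["weight"])
while not satisfiable(active):
    dropped = active.pop()          # relax the least-important constraint first
    print(f"relaxing: {dropped['name']} (weight {dropped['weight']})")
print("satisficing schedule uses:", [c["name"] for c in active])
```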

AAAI Conference 1992 Conference Paper

COMPOSER: A Probabilistic Solution to the Utility Problem in Speed-Up Learning

  • Jonathan Gratch

In machine learning there is considerable interest in techniques which improve planning ability. Initial investigations have identified a wide variety of techniques to address this issue. Progress has been hampered by the utility problem, a basic tradeoff between the benefit of learned knowledge and the cost to locate and apply relevant knowledge. In this paper we describe the COMPOSER system which embodies a probabilistic solution to the utility problem. We outline the statistical foundations of our approach and compare it against four other approaches which appear in the literature.
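
A sketch of the statistical flavor of such a solution (simplified; COMPOSER's actual stopping criterion differs): adopt a candidate control rule only when sampled problems show, with high confidence, that its time savings outweigh the cost of matching and applying it.

```python
# Simplified utility-problem sketch: estimate a learned rule's net benefit
# from paired problem-solving times and adopt only if confidently positive.
import math, random, statistics

random.seed(1)

def solve_time(problem_seed, use_rule):
    """Stand-in for running the planner: the rule saves time on most problems
    but adds a match cost on all of them (invented distribution)."""
    rng = random.Random(problem_seed)
    base = rng.uniform(5, 15)
    match_cost = 0.4
    saving = rng.uniform(0, 3) if rng.random() < 0.7 else 0.0
    return base - saving + match_cost if use_rule else base

# Incremental utility of the rule on a sample of problems from the domain.
deltas = [solve_time(seed, False) - solve_time(seed, True) for seed in range(40)]
mean, sd = statistics.mean(deltas), statistics.stdev(deltas)
stderr = sd / math.sqrt(len(deltas))
print(f"mean speed-up {mean:.2f}s  (std err {stderr:.2f})")
if mean > 2 * stderr:     # roughly 97.5% confident the rule's net utility > 0
    print("adopt rule: expected benefit outweighs its match cost")
else:
    print("reject rule: benefit not demonstrated on this domain")
```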