Arrow Research · Search

Author name cluster

Casey Kennington

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
2 author rows

Possible papers (2)

IROS 2025 · Conference Paper

Recognizing and Generating Novel Emotional Behaviors on Two Robotic Platforms

  • Rista Baral
  • Bethany Grenz
  • Casey Kennington

Recent advancements in language modeling have enabled robots to more easily generate complex behaviors. However, ensuring that the generated behaviors align with the intended emotional states of the robot is necessary in many domains where robots are used. In this paper, we present an adversarial-like training regime in which a generative model of emotional behavior is enhanced through feedback from both an emotion discriminator and a novelty loss, to ensure that the generated behaviors are non-redundant. Our generative model, fine-tuned on a dataset of robot behaviors labeled with emotions, generates behavior sequences perceived as reflecting the emotional qualities of the input emotion labels. Through our training regime, the generative model is refined by minimizing the discrepancies in both emotion classification and behavioral novelty. We evaluated our approach through multiple experiments and human evaluations, where participants were asked to appraise the emotions conveyed by robot behaviors and rate the novelty of the behaviors. Experimental results demonstrate that our two models, one for classifying and one for generating emotional behaviors, are effective, with the generative model producing emotionally rich behaviors that differ from previously generated outputs.
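As a rough illustration of the training regime this abstract describes, here is a minimal sketch (PyTorch) of a generator update that combines an emotion-classification loss from a discriminator with a novelty penalty against previously generated behaviors. The module names, tensor shapes, loss weighting, and toy modules are illustrative assumptions, not the authors' actual architecture or implementation.

```python
import torch
import torch.nn.functional as F

def generator_step(generator, discriminator, emotion_labels, history,
                   optimizer, novelty_weight=0.1):
    """One hypothetical update combining emotion and novelty losses."""
    behaviors = generator(emotion_labels)          # (batch, behavior_dim)
    logits = discriminator(behaviors)              # (batch, num_emotions)
    # Emotion loss: the discriminator should recognize the target
    # emotion in the generated behavior (only the generator is updated).
    emotion_loss = F.cross_entropy(logits, emotion_labels)
    # Novelty loss: penalize cosine similarity to earlier generations,
    # pushing the generator toward non-redundant behaviors.
    if history:
        past = torch.stack(history)                # (k, behavior_dim)
        sims = F.cosine_similarity(behaviors.unsqueeze(1),
                                   past.unsqueeze(0), dim=-1)
        novelty_loss = sims.max(dim=1).values.mean()
    else:
        novelty_loss = behaviors.new_zeros(())
    loss = emotion_loss + novelty_weight * novelty_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    history.extend(b.detach() for b in behaviors)
    return loss.item()

# Toy usage with stand-in modules: an embedding "generator" mapping four
# emotion labels to 16-dim behavior vectors, and a linear "discriminator".
gen = torch.nn.Embedding(4, 16)
disc = torch.nn.Linear(16, 4)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
history = []
labels = torch.randint(0, 4, (8,))
print(generator_step(gen, disc, labels, history, opt))
```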

AAAI 2018 · Conference Paper

Placing Objects in Gesture Space: Toward Incremental Interpretation of Multimodal Spatial Descriptions

  • Ting Han
  • Casey Kennington
  • David Schlangen

When describing routes not in the current environment, a common strategy is to anchor the description in configurations of salient landmarks, complementing the verbal descriptions by “placing” the non-visible landmarks in the gesture space. Understanding such multimodal descriptions and later locating the landmarks in the real world is a challenging task for the hearer, who must interpret speech and gestures in parallel, fuse information from both modalities, build a mental representation of the description, and ground that knowledge to real-world landmarks. In this paper, we model the hearer’s task, using a multimodal spatial description corpus we collected. To reduce the variability of verbal descriptions, we simplified the setup to use simple objects as landmarks. We describe a real-time system to evaluate the separate and joint contributions of the modalities. We show that gestures not only improve overall system performance, even though they largely encode redundant information, but also lead to earlier final correct interpretations. Being able to build and apply representations incrementally will be of use in more dialogical settings, we argue, where it can enable immediate clarification in cases of mismatch.
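To make the incremental interpretation concrete, the sketch below maintains a distribution over candidate objects and multiplies in evidence word by word, fusing gesture evidence whenever a placement is observed, so a best guess is available at every increment. The resolver class, scorer signatures, and smoothing constant are hypothetical illustrations, not the paper's model.

```python
def normalize(scores):
    """Rescale candidate scores to a probability distribution."""
    total = sum(scores.values()) or 1.0
    return {obj: s / total for obj, s in scores.items()}

class IncrementalResolver:
    """Hypothetical hearer model: fuse speech and gesture evidence
    incrementally over a fixed set of candidate objects."""

    def __init__(self, candidates, word_scorer, gesture_scorer):
        # word_scorer(word, obj) and gesture_scorer(gesture, obj) are
        # assumed to return nonnegative match scores.
        self.word_scorer = word_scorer
        self.gesture_scorer = gesture_scorer
        self.belief = normalize({obj: 1.0 for obj in candidates})

    def add_word(self, word):
        # Each incoming word multiplicatively reweights the candidates.
        self.belief = normalize({
            obj: p * (self.word_scorer(word, obj) + 1e-6)
            for obj, p in self.belief.items()})
        return self.best()

    def add_gesture(self, gesture):
        # Gesture "placements" are fused the same way; even largely
        # redundant gesture evidence can sharpen the distribution earlier.
        self.belief = normalize({
            obj: p * (self.gesture_scorer(gesture, obj) + 1e-6)
            for obj, p in self.belief.items()})
        return self.best()

    def best(self):
        return max(self.belief, key=self.belief.get)

# Toy usage with keyword scorers over two candidate objects.
resolver = IncrementalResolver(
    ["red_cup", "blue_box"],
    word_scorer=lambda w, o: 1.0 if w in o else 0.1,
    gesture_scorer=lambda g, o: 1.0 if g == o else 0.1)
resolver.add_word("red")            # interim guess after one word
print(resolver.add_gesture("red_cup"))
```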