Arrow Research search

Author name cluster

Bradford Mott

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches; it is not a full identity-disambiguation profile.

6 papers
1 author row

Possible papers (6)

IJCAI 2018 Conference Paper

High-Fidelity Simulated Players for Interactive Narrative Planning

  • Pengcheng Wang
  • Jonathan Rowe
  • Wookhee Min
  • Bradford Mott
  • James Lester

Interactive narrative planning offers significant potential for creating adaptive gameplay experiences. While data-driven techniques have been devised that utilize player interaction data to induce policies for interactive narrative planners, they require enormously large gameplay datasets. A promising approach to addressing this challenge is creating simulated players whose behaviors closely approximate those of human players. In this paper, we propose a novel approach to generating high-fidelity simulated players based on deep recurrent highway networks and deep convolutional networks. Empirical results demonstrate that the proposed models significantly outperform the prior state-of-the-art in generating high-fidelity simulated player models that accurately imitate human players’ narrative interactions. Using the high-fidelity simulated player models, we show the advantage of more exploratory reinforcement learning methods for deriving generalizable narrative adaptation policies.
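
The highway networks named in this abstract mix a learned transformation of the input with the input itself through a gate. A minimal NumPy sketch of a single (non-recurrent) highway layer, not the authors' implementation; the weights here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: gate T mixes a transformed signal H with the input."""
    h = np.tanh(x @ W_h + b_h)       # candidate transformation H(x)
    t = sigmoid(x @ W_t + b_t)       # transform gate T(x), elementwise in (0, 1)
    return t * h + (1.0 - t) * x     # carry the rest of x through unchanged

d = 4
x = rng.standard_normal(d)
W_h, W_t = rng.standard_normal((d, d)), rng.standard_normal((d, d))
b_h, b_t = np.zeros(d), np.full(d, -1.0)  # negative gate bias favors carrying x

y = highway_layer(x, W_h, b_h, W_t, b_t)
print(y.shape)  # (4,)
```

The negative transform-gate bias is the usual initialization trick: early in training the layer behaves almost like the identity, which is what makes very deep stacks trainable.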

IJCAI 2017 Conference Paper

Interactive Narrative Personalization with Deep Reinforcement Learning

  • Pengcheng Wang
  • Jonathan Rowe
  • Wookhee Min
  • Bradford Mott
  • James Lester

Data-driven techniques for interactive narrative generation are the subject of growing interest. Reinforcement learning (RL) offers significant potential for devising data-driven interactive narrative generators that tailor players’ story experiences by inducing policies from player interaction logs. A key open question in RL-based interactive narrative generation is how to model complex player interaction patterns to learn effective policies. In this paper we present a deep RL-based interactive narrative generation framework that leverages synthetic data produced by a bipartite simulated player model. Specifically, the framework involves training a set of Q-networks to control adaptable narrative event sequences with long short-term memory network-based simulated players. We investigate the deep RL framework’s performance with an educational interactive narrative, Crystal Island. Results suggest that the deep RL-based narrative generation framework yields effective personalized interactive narratives.
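
The Q-networks in this framework approximate the standard Q-learning update, applied to choosing narrative events. A toy tabular sketch of that update; the states, events, and reward are invented placeholders, not Crystal Island content:

```python
from collections import defaultdict

alpha, gamma = 0.5, 0.9          # learning rate, discount factor
Q = defaultdict(float)           # Q[(state, event)] -> estimated value

def q_update(state, event, reward, next_state, events):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, e)] for e in events)
    Q[(state, event)] += alpha * (reward + gamma * best_next - Q[(state, event)])

events = ["reveal_clue", "introduce_character"]
# A simulated player responds well (+1) to revealing a clue in state "act1".
q_update("act1", "reveal_clue", 1.0, "act2", events)
print(Q[("act1", "reveal_clue")])  # 0.5 = alpha * (1.0 + gamma * 0 - 0)
```

In the paper's setting the table is replaced by a neural Q-network and the rewards come from the simulated player model rather than live players.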

IJCAI 2016 Conference Paper

Player Goal Recognition in Open-World Digital Games with Long Short-Term Memory Networks

  • Wookhee Min
  • Bradford Mott
  • Jonathan Rowe
  • Barry Liu
  • James Lester

Recent years have seen a growing interest in player modeling for digital games. Goal recognition, which aims to accurately recognize players' goals from observations of low-level player actions, is a key problem in player modeling. However, player goal recognition poses significant challenges because of the inherent complexity and uncertainty pervading gameplay. In this paper, we formulate player goal recognition as a sequence labeling task and introduce a goal recognition framework based on long short-term memory (LSTM) networks. Results show that LSTM-based goal recognition is significantly more accurate than previous state-of-the-art methods, including n-gram encoded feedforward neural networks pre-trained with stacked denoising autoencoders, as well as Markov logic network-based models. Because of increased goal recognition accuracy and the elimination of labor-intensive feature engineering, LSTM-based goal recognition provides an effective solution to a central problem in player modeling for open-world digital games.
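
The core machinery here is the LSTM cell that consumes one encoded player action per step. A minimal NumPy sketch of a single cell step, not the authors' model; the weights are random placeholders, and the action vocabulary is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates i, f, o and candidate g, each of size d."""
    d = h.size
    z = W @ x + U @ h + b          # stacked pre-activations, shape (4d,)
    i = sigmoid(z[0:d])            # input gate
    f = sigmoid(z[d:2*d])          # forget gate
    o = sigmoid(z[2*d:3*d])        # output gate
    g = np.tanh(z[3*d:4*d])        # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy run: three one-hot "player actions" fed through a 5-unit cell;
# the final hidden state would feed a softmax over goal labels.
n_actions, d = 4, 5
W = rng.standard_normal((4 * d, n_actions)) * 0.1
U = rng.standard_normal((4 * d, d)) * 0.1
b = np.zeros(4 * d)
h, c = np.zeros(d), np.zeros(d)
for a in [0, 2, 1]:                # encoded action sequence
    x = np.eye(n_actions)[a]
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (5,)
```

Framing goal recognition as sequence labeling means a goal prediction can be read off the hidden state after every action, not just at the end of the episode.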

AAAI 2012 Conference Paper

Goal Recognition with Markov Logic Networks for Player-Adaptive Games

  • Eun Ha
  • Jonathan Rowe
  • Bradford Mott
  • James Lester

Goal recognition in digital games involves inferring players’ goals from observed sequences of low-level player actions. Goal recognition models support player-adaptive digital games, which dynamically augment game events in response to player choices for a range of applications, including entertainment, training, and education. However, digital games pose significant challenges for goal recognition, such as exploratory actions and ill-defined goals. This paper presents a goal recognition framework based on Markov logic networks (MLNs). The model’s parameters are directly learned from a corpus that was collected from player interactions with a non-linear educational game. An empirical evaluation demonstrates that the MLN goal recognition framework accurately predicts players’ goals in a game environment with exploratory actions and ill-defined goals.
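
An MLN scores each possible world by the weights of the logical formulas it satisfies. A toy log-linear sketch in that spirit, far simpler than real MLN inference; the formulas, weights, and action names are invented for illustration:

```python
import math

def score_goals(actions, formulas):
    """P(goal | actions) proportional to exp(sum of weights of satisfied formulas)."""
    scores = {}
    for goal, rules in formulas.items():
        scores[goal] = math.exp(sum(w for pred, w in rules if pred(actions)))
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

# Each rule is (condition on observed actions, weight); higher weight = stronger vote.
formulas = {
    "test_water": [(lambda a: "pick_up_vial" in a, 2.0),
                   (lambda a: "enter_lab" in a, 1.0)],
    "talk_to_nurse": [(lambda a: "enter_infirmary" in a, 1.5)],
}
probs = score_goals(["enter_lab", "pick_up_vial"], formulas)
print(max(probs, key=probs.get))  # test_water
```

The paper learns such weights from a player corpus instead of hand-setting them, and uses first-order formulas grounded over the game state rather than flat predicates.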

AAAI 2006 Conference Paper

Probabilistic Goal Recognition in Interactive Narrative Environments

  • Bradford Mott

Recent years have witnessed a growing interest in interactive narrative-centered virtual environments for education, training, and entertainment. Narrative environments dynamically craft engaging story-based experiences for users, who are themselves active participants in unfolding stories. A key challenge posed by interactive narrative is recognizing users’ goals so that narrative planners can dynamically orchestrate plot elements and character actions to create rich, customized stories. In this paper we present an inductive approach to predicting users’ goals by learning probabilistic goal recognition models. This approach has been evaluated in a narrative environment for the domain of microbiology in which the user plays the role of a medical detective solving a science mystery. An empirical evaluation of goal recognition based on n-gram models and Bayesian networks suggests that the models offer significant predictive power.
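
An n-gram goal recognizer of the kind evaluated here reduces to conditional frequency counts over recent actions. A minimal sketch with a unigram context; the corpus, actions, and goal labels are invented, not the paper's microbiology data:

```python
from collections import Counter, defaultdict

def train(corpus, n=1):
    """Count goals conditioned on the last n actions of each training sequence."""
    counts = defaultdict(Counter)
    for actions, goal in corpus:
        counts[tuple(actions[-n:])][goal] += 1
    return counts

def predict(counts, actions, n=1):
    """Return the most frequent goal for this context, or None if unseen."""
    context = tuple(actions[-n:])
    if context not in counts:
        return None
    return counts[context].most_common(1)[0][0]

corpus = [
    (["enter_lab", "use_microscope"], "run_test"),
    (["enter_lab", "use_microscope"], "run_test"),
    (["enter_infirmary", "talk_nurse"], "get_diagnosis"),
]
model = train(corpus)
print(predict(model, ["pick_up_slide", "use_microscope"]))  # run_test
```

A Bayesian network version, as also evaluated in the paper, would additionally model dependencies among observed evidence variables instead of conditioning on a fixed-length action window.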