Arrow Research search

Author name cluster

Tim Miller

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

35 papers
1 author row

Possible papers

35

AAMAS Conference 2025 Conference Paper

A Hypothesis-Driven Approach to Explainable Goal Recognition

  • Abeer Alshehri
  • Hissah Alotaibi
  • Tim Miller
  • Mor Vered

In this paper, we introduce an explainable goal-recognition (XGR) approach for decision support that instantiates the evaluative AI paradigm. Current explainable AI (XAI) approaches focus on providing recommendations and justifying those recommendations. However, a shift toward evaluative AI has been proposed, focusing on generating evidence to support or refute human judgments and explaining trade-offs among hypotheses, rather than merely justifying AI recommendations. We introduce such a method for goal recognition tasks by leveraging the Weight of Evidence (WoE) framework. Through a human study in a maritime surveillance task, we demonstrate that our model improves decision accuracy, efficiency, and reliance in complex scenarios, outperforming two baseline models and showing its potential for real-world decision-making.
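
The Weight of Evidence underlying this approach is a standard log-likelihood ratio. The sketch below is a rough illustration only, not the authors' implementation: the goal names and likelihood values are hypothetical, and the uniform mixture over rival goals is a simplifying assumption.

```python
import math

def weight_of_evidence(p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """WoE(h : e) = log P(e | h) - log P(e | not h), in nats.

    Positive values mean observation e speaks for hypothesis h;
    negative values mean it speaks against it.
    """
    return math.log(p_obs_given_h) - math.log(p_obs_given_not_h)

# Hypothetical example: an observed manoeuvre is far more likely if the
# vessel's goal is "rendezvous" than under the alternative goals.
likelihoods = {"rendezvous": 0.6, "transit": 0.1, "fishing": 0.05}

for goal, p in likelihoods.items():
    rivals = [q for g, q in likelihoods.items() if g != goal]
    p_not = sum(rivals) / len(rivals)  # crude uniform mixture over rivals
    print(goal, round(weight_of_evidence(p, p_not), 2))
```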

PRL Workshop 2025 Workshop Paper

Exploring Explainable Multi-player MCTS-minimax Hybrids in Board Game Using Process Mining

  • Yiyu Qian
  • Tim Miller
  • Liyuan Zhao

Monte-Carlo Tree Search (MCTS) is a family of sampling-based search algorithms widely used for online planning in sequential decision-making domains and at the heart of many recent advances in artificial intelligence. Understanding the behavior of MCTS agents is difficult for developers and users due to the frequently large and complex search trees that result from the simulation of many possible futures, their evaluations, and their relationships. This paper presents our ongoing investigation into potential explanations for the decision-making and behavior of MCTS. A weakness of MCTS is that it constructs a highly selective tree and, as a result, can miss crucial moves and fall into tactical traps. Full-width minimax search offers a remedy. We integrate shallow minimax search into the rollout phase of multi-player MCTS and use process mining techniques to explain agents’ strategies in 3v3 checkers.
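
As a minimal sketch of the hybrid idea (shallow minimax guiding the rollout phase), the following uses a toy two-player Nim game purely so the code runs; the paper's multi-player checkers setting and its process-mining analysis are far richer than this.

```python
import random

# Toy two-player Nim: state = (stones_left, player_to_move); a move takes
# 1-3 stones and whoever takes the last stone wins. Illustrative only.
def legal_moves(s): return [m for m in (1, 2, 3) if m <= s[0]]
def apply_move(s, m): return (s[0] - m, 1 - s[1])
def terminal(s): return s[0] == 0
def evaluate(s, player):
    # 1.0 iff terminal and the player who just moved (the winner) is
    # `player`; 0.0 otherwise, including as a flat heuristic at the cutoff.
    return 1.0 if terminal(s) and s[1] != player else 0.0

def minimax(s, depth, player):
    """Depth-limited, full-width minimax: the search embedded in rollouts."""
    if depth == 0 or terminal(s):
        return evaluate(s, player), None
    best = None
    for m in legal_moves(s):
        v, _ = minimax(apply_move(s, m), depth - 1, player)
        if best is None or (v > best[0] if s[1] == player else v < best[0]):
            best = (v, m)
    return best

def informed_rollout(s, player, depth=2, eps=0.1):
    """MCTS rollout that follows shallow minimax most of the time,
    keeping a little randomness so playouts stay diverse."""
    while not terminal(s):
        m = (random.choice(legal_moves(s)) if random.random() < eps
             else minimax(s, depth, s[1])[1])
        s = apply_move(s, m)
    return evaluate(s, player)

print(informed_rollout((7, 0), player=0))
```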

JAIR Journal 2025 Journal Article

Towards Explainable Goal Recognition Using Weight of Evidence (WoE): A Human-Centered Approach

  • Abeer Alshehri
  • Amal Abdulrahman
  • Hajar Alamri
  • Tim Miller
  • Mor Vered

Goal recognition (GR) involves inferring an agent's unobserved goal from a sequence of observations. This is a critical problem in AI with diverse applications. Traditionally, GR has been addressed using 'inference to the best explanation' or abduction, where hypotheses about the agent's goals are generated as the most plausible explanations for observed behavior. Alternatively, some approaches enhance interpretability by ensuring that an agent's behavior aligns with an observer's expectations or by making the reasoning behind decisions more transparent. In this work, we tackle a different challenge: explaining the GR process in a way that is comprehensible to humans. We introduce and evaluate an explainable model for GR agents, grounded in the theoretical framework and cognitive processes underlying human behavior explanation. Drawing on insights from two human-agent studies, we propose a conceptual framework for human-centered explanations of GR. Using this framework, we develop the eXplainable Goal Recognition (XGR) model, which generates explanations for both 'why' and 'why not' questions. We evaluate the model computationally across eight GR benchmarks and through three user studies. The first study assesses the efficiency of generating human-like explanations within the Sokoban game domain, the second examines perceived explainability in the same domain, and the third evaluates the model's effectiveness in aiding decision-making in illegal fishing detection. Results demonstrate that the XGR model significantly enhances user understanding, trust, and decision-making compared to baseline models, underscoring its potential to improve human-agent collaboration.

AAAI Conference 2023 Conference Paper

Explaining Model Confidence Using Counterfactuals

  • Thao Le
  • Tim Miller
  • Ronal Singh
  • Liz Sonenberg

Displaying confidence scores in human-AI interaction has been shown to help build trust between humans and AI systems. However, most existing research uses only the confidence score as a form of communication. As confidence scores are just another model output, users may want to understand why the algorithm is confident to determine whether to accept the confidence score. In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction. We present two methods for understanding model confidence using counterfactual explanation: (1) based on counterfactual examples; and (2) based on visualisation of the counterfactual space. Both increase understanding and trust for study participants over a baseline of no explanation, but qualitative results show that they are used quite differently, leading to recommendations for when to use each one and directions for designing better explanations.
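
One simple way to see what a confidence counterfactual looks like: perturb a single feature until the model's confidence crosses a threshold. The sketch below is an assumption-laden stand-in for the paper's methods: the classifier, the single-feature line search, and the threshold are all illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Any probabilistic classifier will do; confidence = max class probability.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def confidence(x):
    return model.predict_proba(x.reshape(1, -1)).max()

def confidence_counterfactual(x, feature, target_conf, step=0.05, max_steps=200):
    """Nudge one feature until confidence drops below target_conf.

    Returns the perturbed input, or None if no crossing is found. A crude
    single-feature search, standing in for the paper's two methods."""
    for direction in (+1.0, -1.0):
        z = x.copy()
        for _ in range(max_steps):
            z[feature] += direction * step
            if confidence(z) < target_conf:
                return z
    return None

x = X[0]
print("original confidence:", round(confidence(x), 3))
z = confidence_counterfactual(x, feature=0, target_conf=0.6)
if z is not None:
    print("counterfactual confidence:", round(confidence(z), 3))
```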

AIJ Journal 2023 Journal Article

The effects of explanations on automation bias

  • Mor Vered
  • Tali Livni
  • Piers Douglas Lionel Howe
  • Tim Miller
  • Liz Sonenberg

In this paper we explore the effect of explanations on reducing errors in the human decision-making process caused by placing excessive reliance on automated decision support systems. We develop and implement different forms of explanations based on cognitive principles and evaluate their effect over two different domains: our new version of the Coloured Trails game, and a simulated radiological task. We found that explanations did not reduce this aspect of automation bias and sometimes increased it. However, they reduced completion time and often increased user decision accuracy, despite not altering the perceived task load. Overall, explanations were beneficial, though the benefits were highly context dependent. This work contributes to understanding the complex interplay between automation bias, performance and explanations.

AIJ Journal 2022 Journal Article

Efficient multi-agent epistemic planning: Teaching planners about nested belief

  • Christian Muise
  • Vaishak Belle
  • Paolo Felli
  • Sheila McIlraith
  • Tim Miller
  • Adrian R. Pearce
  • Liz Sonenberg

Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents. However, planning involving nested beliefs is known to be computationally challenging. In this work, we address the task of synthesizing plans that necessitate reasoning about the beliefs of other agents. We plan from the perspective of a single agent with the potential for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability for one agent to reason as if it were another. We formally characterize our notion of planning with nested belief, and subsequently demonstrate how to automatically convert such problems into ones that classical planning technology can solve efficiently. Our approach represents an important step towards applying the well-established field of automated planning to the challenging task of planning involving nested beliefs of multiple agents.

JAIR Journal 2022 Journal Article

Planning with Perspectives – Decomposing Epistemic Planning using Functional STRIPS

  • Guang Hu
  • Tim Miller
  • Nir Lipovetzky

In this paper, we present a novel approach to epistemic planning called planning with perspectives (PWP) that is both more expressive and computationally more efficient than existing state-of-the-art epistemic planning tools. Epistemic planning — planning with knowledge and belief — is essential in many multi-agent and human-agent interaction domains. Most state-of-the-art epistemic planners solve epistemic planning problems by either compiling to propositional classical planning (for example, generating all possible knowledge atoms or compiling epistemic formulae to normal forms); or explicitly encoding Kripke-based semantics. However, these methods become computationally infeasible as problem sizes grow. In this paper, we decompose epistemic planning by delegating reasoning about epistemic formulae to an external solver. We do this by modelling the problem using Functional STRIPS, which is more expressive than standard STRIPS and supports the use of external, black-box functions within action models. Building on recent work that demonstrates the relationship between what an agent ‘sees’ and what it knows, we define the perspective of each agent using an external function, and build a solver for epistemic logic around this. Modellers can customise the perspective function of agents, allowing new epistemic logics to be defined without changing the planner. We ran evaluations on well-known epistemic planning benchmarks, comparing against an existing state-of-the-art planner, and on new scenarios that demonstrate the expressiveness of the PWP approach. The results show that our PWP planner scales significantly better than the state-of-the-art planner that we compared against, and can express problems more succinctly.
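
To make the perspective-function idea concrete: a rough sketch, assuming a state is just a dict of positions on a line and that an agent 'sees' whatever lies within some range of itself. Nesting the function gives nested perspectives ("what a can tell b sees"); none of this is the planner's actual F-STRIPS encoding.

```python
SIGHT = 3.0  # assumed visibility range; purely illustrative

def perspective(state: dict, agent: str) -> dict:
    """Return the fragment of the state visible to `agent`."""
    me = state.get(agent)
    if me is None:
        return {}
    return {obj: pos for obj, pos in state.items() if abs(pos - me) <= SIGHT}

state = {"a": 0.0, "b": 2.0, "c": 9.0}

print(perspective(state, "a"))                    # a sees a and b, not c
print(perspective(perspective(state, "a"), "b"))  # what a can tell b sees
```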

KER Journal 2021 Journal Article

Contrastive explanation: a structural-model approach

  • Tim Miller

This paper presents a model of contrastive explanation using structural causal models. The topic of causal explanation in artificial intelligence has gathered interest in recent years as researchers and practitioners aim to increase trust and understanding of intelligent decision-making. While different sub-fields of artificial intelligence have looked into this problem with a sub-field-specific view, there are few models that aim to capture explanation more generally. One general model is based on structural causal models. It defines an explanation as a fact that, if found to be true, would constitute an actual cause of a specific event. However, research in philosophy and social sciences shows that explanations are contrastive: that is, when people ask for an explanation of an event—the fact—they (sometimes implicitly) are asking for an explanation relative to some contrast case; that is, ‘Why P rather than Q?’. In this paper, we extend the structural causal model approach to define two complementary notions of contrastive explanation, and demonstrate them on two classical problems in artificial intelligence: classification and planning. We believe that this model can help researchers in subfields of artificial intelligence to better understand contrastive explanation.
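
A toy structural causal model helps fix intuitions for the contrastive question 'Why P rather than Q?'. The variable names and the crude difference-based notion of contrast below are illustrative only, not the paper's formal definitions.

```python
# A two-equation SCM: wetness is caused by rain or a sprinkler, and
# slipperiness by wetness.
def run_scm(rain: bool, sprinkler: bool) -> dict:
    wet = rain or sprinkler
    slippery = wet
    return {"rain": rain, "sprinkler": sprinkler,
            "wet": wet, "slippery": slippery}

fact = run_scm(rain=True, sprinkler=False)    # P: the path is slippery
foil = run_scm(rain=False, sprinkler=False)   # Q: the path is not slippery

# A crude contrast: which variables differ between the world where the
# fact holds and the world where the foil holds?
difference = {v: (fact[v], foil[v]) for v in fact if fact[v] != foil[v]}
print(difference)  # rain, wet and slippery differ; the sprinkler does not
```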

AAMAS Conference 2021 Conference Paper

Deceptive Reinforcement Learning for Privacy-Preserving Planning

  • Zhengshang Liu
  • Yue Yang
  • Tim Miller
  • Peta Masters

In this paper, we study the problem of deceptive reinforcement learning to preserve the privacy of a reward function. Reinforcement learning is the problem of finding a behaviour policy based on rewards received from exploratory behaviour. A key ingredient in reinforcement learning is a reward function, which determines how much reward (negative or positive) is given and when. However, in some situations, we may want to keep a reward function private; that is, to make it difficult for an observer to determine the reward function used. We define the problem of privacy-preserving reinforcement learning, and present two models for solving it. These models are based on dissimulation – a form of deception that ‘hides the truth’. We evaluate our models both computationally and via human behavioural experiments. Results show that the resulting policies are indeed deceptive, and that participants can determine the true reward function less reliably than they can for an honest agent.
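
One way to get a feel for dissimulation: track an observer's Bayesian posterior over candidate reward functions and prefer actions that keep that posterior flat. The one-shot setting, the payoff numbers, and the entropy-bonus trade-off below are all illustrative assumptions, not the paper's two models.

```python
import math

# Candidate reward functions the observer considers, expressed here as
# per-action payoffs in a one-shot setting. Entirely hypothetical values.
CANDIDATES = {
    "true":  {"left": 1.0, "right": 0.2},
    "decoy": {"left": 0.3, "right": 0.9},
}

def boltzmann(payoffs, beta=3.0):
    """Soft-rational action distribution the observer assumes."""
    z = sum(math.exp(beta * v) for v in payoffs.values())
    return {a: math.exp(beta * v) / z for a, v in payoffs.items()}

def observer_posterior(action):
    """Bayes update over candidate rewards after one observed action."""
    prior = {r: 1 / len(CANDIDATES) for r in CANDIDATES}
    post = {r: prior[r] * boltzmann(CANDIDATES[r])[action] for r in CANDIDATES}
    z = sum(post.values())
    return {r: p / z for r, p in post.items()}

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

# A dissimulating choice: trade true reward against how uncertain the
# observer's posterior stays after seeing the action.
scores = {a: CANDIDATES["true"][a] + 2.0 * entropy(observer_posterior(a))
          for a in CANDIDATES["true"]}
print(max(scores, key=scores.get), scores)
```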

JAIR Journal 2021 Journal Article

Goal Recognition for Deceptive Human Agents through Planning and Gaze

  • Thao Le
  • Ronal Singh
  • Tim Miller

Eye gaze has the potential to provide insight into the minds of individuals, and this idea has been used in prior research to improve human goal recognition by combining a human's actions and gaze. However, most existing research assumes that people are rational and honest. In adversarial scenarios, people may deliberately alter their actions and gaze, which presents a challenge to goal recognition systems. In this paper, we present new models for goal recognition under deception using a combination of gaze behaviour and observed movements of the agent. These models aim to detect when a person is being deceptive by analysing their gaze patterns, and use this information to adjust the goal recognition. We evaluated our models in two human-subject studies: (1) using data collected from 30 individuals playing a navigation game inspired by an existing deception study and (2) using data collected from 40 individuals playing a competitive game (Ticket To Ride). We found that one of our models (Modulated Deception Gaze+Ontic) offers promising results compared to the previous state-of-the-art model in both studies. Our work complements existing adversarial goal recognition systems by equipping these systems with the ability to tackle ambiguous gaze behaviours.

AAAI Conference 2021 Conference Paper

Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors

  • Ruihan Zhang
  • Prashan Madumal
  • Tim Miller
  • Krista A. Ehinger
  • Benjamin I. P. Rubinstein

Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form. This deficiency remains a key challenge when applying CNNs in important domains. Recent work on explanations through feature importance of approximate linear models has moved from input-level features (pixels or segments) to features from mid-layer feature maps in the form of concept activation vectors (CAVs). CAVs contain concept-level information and could be learned via clustering. In this work, we rethink the ACE algorithm of Ghorbani et al., proposing an alternative invertible concept-based explanation (ICE) framework to overcome its shortcomings. Based on the requirements of fidelity (approximate models to target models) and interpretability (being meaningful to people), we design measurements and evaluate a range of matrix factorization methods with our framework. We find that non-negative concept activation vectors (NCAVs) from non-negative matrix factorization provide superior performance in interpretability and fidelity based on computational and human subject experiments. Our framework provides both local and global concept-level explanations for pre-trained CNN models.
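
The core factorization step can be sketched in a few lines: flatten mid-layer activations into a (positions × channels) matrix and run non-negative matrix factorization; rows of the channel factor then act as concept vectors. The random activations below are a stand-in, as real usage would take post-ReLU feature maps from a pre-trained network.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in for mid-layer CNN activations: (images, height, width, channels).
# Values must be non-negative (e.g. post-ReLU) for NMF to apply.
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7, 64)).astype(np.float32)

# Flatten spatial positions into rows: each row is one location's channel
# activation vector, the unit that ACE/ICE-style methods factorise.
V = acts.reshape(-1, acts.shape[-1])           # (8*7*7, 64)

n_concepts = 5
nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(V)   # (positions, concepts): where each concept fires
H = nmf.components_        # (concepts, channels): the non-negative CAVs

# Invertibility in the ICE sense: V is approximately W @ H, so concept
# scores can be mapped back to approximate feature maps and saliency.
print("reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 2))
print("NCAV shape:", H.shape)
```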

AIJ Journal 2020 Journal Article

Combining gaze and AI planning for online human intention recognition

  • Ronal Singh
  • Tim Miller
  • Joshua Newn
  • Eduardo Velloso
  • Frank Vetere
  • Liz Sonenberg

Intention recognition is the process of using behavioural cues, such as deliberative actions, eye gaze, and gestures, to infer an agent's goals or future behaviour. In artificial intelligence, one approach for intention recognition is to use a model of possible behaviour to rate intentions as more likely if they are a better ‘fit’ to actions observed so far. In this paper, we draw from literature linking gaze and visual attention, and we propose a novel model of online human intention recognition that combines gaze and model-based AI planning to build probability distributions over a set of possible intentions. In human-behavioural experiments (n = 40) involving a multi-player board game, we demonstrate that adding gaze-based priors to model-based intention recognition improved the accuracy of intention recognition by 22% (p < 0.05), determined those intentions ≈90 seconds earlier (p < 0.05), and at no additional computational cost. We also demonstrate that, when evaluated in the presence of semi-rational or deceptive gaze behaviours, the proposed model is significantly more accurate (9% improvement, p < 0.05) compared to model-based or gaze-only approaches. Our results indicate that the proposed model could be used to design novel human-agent interactions in cases when we are unsure whether a person is honest, deceitful, or semi-rational.
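
The combination can be read as Bayesian: gaze yields a prior over intentions and plan costs yield a likelihood, with the posterior as their normalised product. The goal names, the cost numbers, and the cost-difference likelihood (a standard device in cost-based plan recognition) below are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hypothetical gaze-derived prior over three candidate intentions
# (e.g. proportion of recent fixations near each goal location).
gaze_prior = {"station": 0.5, "depot": 0.3, "harbour": 0.2}

# Plan-based likelihoods from cost differences: the observed actions are
# likelier under goals they commit to (small extra cost to still reach
# that goal from the current state).
extra_cost = {"station": 0.0, "depot": 2.0, "harbour": 5.0}
beta = 1.0
likelihood = {g: math.exp(-beta * c) for g, c in extra_cost.items()}

# Posterior over intentions = normalised prior x likelihood.
post = {g: gaze_prior[g] * likelihood[g] for g in gaze_prior}
z = sum(post.values())
post = {g: p / z for g, p in post.items()}
print({g: round(p, 3) for g, p in post.items()})
```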

AAAI Conference 2020 Conference Paper

Explainable Reinforcement Learning through a Causal Lens

  • Prashan Madumal
  • Tim Miller
  • Liz Sonenberg
  • Frank Vetere

Prominent theories in cognitive science propose that humans understand and represent the knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events and use these to explain why new events happen by referring to counterfactuals — things that did not happen. In this paper, we use causal models to derive causal explanations of the behaviour of model-free reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We computationally evaluate the model in 6 domains and measure performance and task prediction accuracy. We report on a study with 120 participants who observe agents playing a real-time strategy game (Starcraft II) and then receive explanations of the agents' behaviour. We investigate: 1) participants' understanding gained from explanations through task prediction; 2) explanation satisfaction; and 3) trust. Our results show that causal model explanations perform better on these measures compared to two other baseline explanation models.

AAAI Conference 2020 Conference Paper

Implicit Coordination Using FOND Planning

  • Thorsten Engesser
  • Tim Miller

Epistemic planning can be used to achieve implicit coordination in cooperative multi-agent settings where knowledge and capabilities are distributed between the agents. In these scenarios, agents plan and act on their own without having to agree on a common plan or protocol beforehand. However, epistemic planning is undecidable in general. In this paper, we show how implicit coordination can be achieved in a simpler, propositional setting by using nondeterminism as a means to allow the agents to take the other agents’ perspectives. We identify a decidable fragment of epistemic planning that allows for arbitrary initial state uncertainty and nondeterminism, but where actions can never increase the uncertainty of the agents. We show that in this fragment, planning for implicit coordination can be reduced to a version of fully observable nondeterministic (FOND) planning and that it thus has the same computational complexity as FOND planning. We provide a small case study, modeling the problem of multi-agent path finding with destination uncertainty in FOND, to show that our approach can be successfully applied in practice.

AAMAS Conference 2019 Conference Paper

A Grounded Interaction Protocol for Explainable Artificial Intelligence

  • Prashan Madumal
  • Tim Miller
  • Liz Sonenberg
  • Frank Vetere

Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate the internal decisions, behaviours and actions to the interacting humans. Successful explanation involves both cognitive and social processes. In this paper we focus on the challenge of meaningful interaction between an explainer and an explainee and investigate the structural aspects of an interactive explanation to propose an interaction protocol. We follow a bottom-up approach to derive the model by analysing transcripts across different explanation dialogue types, totalling 398 explanation dialogues. We use grounded theory to code and identify key components of an explanation dialogue. We formalize the model using the agent dialogue framework (ADF) as a new dialogue type and then evaluate it in a human-agent interaction study with 101 dialogues from 14 participants. Our results show that the proposed model can closely follow the explanation dialogues of human-agent conversations.

AIJ Journal 2019 Journal Article

Explanation in artificial intelligence: Insights from the social sciences

  • Tim Miller

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a ‘good’ explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which show that people bring certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.

AAMAS Conference 2018 Conference Paper

Combining Planning with Gaze for Online Human Intention Recognition

  • Ronal Singh
  • Tim Miller
  • Joshua Newn
  • Liz Sonenberg
  • Eduardo Velloso
  • Frank Vetere

Intention recognition is the process of using behavioural cues to infer an agent’s goals or future behaviour. People use many behavioural cues to infer others’ intentions, such as deliberative actions, facial expressions, eye gaze, and gestures. In artificial intelligence, two approaches for intention recognition, among others, are gaze-based and model-based intention recognition. Approaches in the former class use gaze to determine which parts of a space a person looks at more often to infer a person’s intention. Approaches in the latter use models of possible future behaviour to rate intentions as more likely if they are a better ‘fit’ to observed actions. In this paper, we propose a novel model of human intention recognition that combines gaze and model-based approaches for online human intention recognition. Gaze data is used to build probability distributions over a set of possible intentions, which are then used as priors in a model-based intention recognition algorithm. In human-behavioural experiments (n = 20) involving a multi-player board game, we found that adding gaze-based priors to model-based intention recognition more accurately determined intentions (p < 0.01), determined those intentions earlier (p < 0.01), and at no additional cost; all compared to a model-based-only approach.

IS Journal 2018 Journal Article

Explaining Explanation, Part 4: A Deep Dive on Deep Nets

  • Robert Hoffman
  • Tim Miller
  • Shane T. Mueller
  • Gary Klein
  • William J. Clancey

This is the fourth in a series of essays about explainable AI. Previous essays laid out the theoretical and empirical foundations. This essay focuses on Deep Nets, and considers methods for allowing system users to generate self-explanations. This is accomplished by exploring how Deep Net systems perform when they are operating at their boundary conditions. Inspired by recent research into adversarial examples that demonstrate the weaknesses of Deep Nets, we invert the purpose of these adversarial examples and argue that spoofing can be used as a tool to answer contrastive explanation questions via user-driven exploration.

AAMAS Conference 2018 Conference Paper

Integrated Hybrid Planning and Programmed Control for Real-Time UAV Maneuvering

  • Miquel Ramirez
  • Michael Papasimeon
  • Nir Lipovetzky
  • Lyndon Benke
  • Tim Miller
  • Adrian R. Pearce
  • Enrico Scala
  • Mohammad Zamani

The automatic generation of realistic behaviour such as tactical intercepts for Unmanned Aerial Vehicles (UAV) in air combat is a challenging problem. State-of-the-art solutions propose hand-crafted algorithms and heuristics whose performance depends heavily on the initial conditions and aerodynamic properties of the UAVs involved. This paper shows how to employ domain-independent planners, embedded into professional multi-agent simulations, to implement two-level Model Predictive Control (MPC) hybrid control systems for simulated UAVs. We compare the performance of controllers using planners with others based on behaviour trees that implement real-world tactics. Our results indicate that hybrid planners derive novel and effective tactics from first principles inherent to the dynamical constraints UAVs are subject to.

JAIR Journal 2017 Journal Article

Logics of Common Ground

  • Tim Miller
  • Jens Pfau
  • Liz Sonenberg
  • Yoshihisa Kashima

According to Clark's seminal work on common ground and grounding, participants collaborating in a joint activity rely on their shared information, known as common ground, to perform that activity successfully, and continually align and augment this information during their collaboration. Similarly, teams of human and artificial agents require common ground to successfully participate in joint activities. Indeed, without appropriate information being shared, using agent autonomy to reduce the workload on humans may actually increase workload as the humans seek to understand why the agents are behaving as they are. While many researchers have identified the importance of common ground in artificial intelligence, there is no precise definition of common ground on which to build the foundational aspects of multi-agent collaboration. In this paper, building on previously-defined modal logics of belief, we present logic definitions for four different types of common ground. We define modal logics for three existing notions of common ground and introduce a new notion of common ground, called salient common ground. Salient common ground captures the common ground of a group participating in an activity and is based on the common ground that arises from that activity as well as on the common ground they shared prior to the activity. We show that the four definitions share some properties, and our analysis suggests possible refinements of the existing informal and semi-formal definitions.

IJCAI Conference 2017 Conference Paper

Real-Time UAV Maneuvering via Automated Planning in Simulations

  • Miquel Ramírez
  • Michael Papasimeon
  • Lyndon Benke
  • Nir Lipovetzky
  • Tim Miller
  • Adrian R. Pearce

The automatic generation of realistic behavior such as tactical intercepts for Unmanned Aerial Vehicles (UAV) in air combat is a challenging problem. State-of-the-art solutions propose hand-crafted algorithms and heuristics whose performance depends heavily on the initial conditions and specific aerodynamic characteristics of the UAVs involved. This demo shows the ability of domain-independent planners, embedded into simulators, to generate on-line, feed-forward, control signals that steer simulated aircraft as best suits the situation.

IJCAI Conference 2017 Conference Paper

The Minds of Many: Opponent Modeling in a Stochastic Game

  • Friedrich Burkhard von der Osten
  • Michael Kirley
  • Tim Miller

The Theory of Mind provides a framework for an agent to predict the actions of adversaries by building an abstract model of their strategies using recursive nested beliefs. In this paper, we extend a recently introduced technique for opponent modeling based on Theory of Mind reasoning. Our extended multi-agent Theory of Mind model explicitly considers multiple opponents simultaneously. We introduce a stereotyping mechanism, which segments the agent population into sub-groups of agents with similar behavior. Here, sub-group profiles guide decision making in place of individual agent profiles. We evaluate our model using a multi-player stochastic game, which presents agents with the challenge of unknown adversaries in a partially-observable environment. Simulation results demonstrate that the model performs well under uncertainty and that stereotyping allows larger groups of agents to be modeled robustly. The findings strengthen results showing that Theory of Mind modeling is useful in many artificial intelligence applications.
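
The stereotyping step can be pictured as clustering agents by behavioural similarity and reasoning against cluster profiles instead of individuals. The feature names, the synthetic data, and the use of k-means below are all illustrative assumptions rather than the paper's mechanism.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical behaviour features per opponent (e.g. aggression rate,
# gathering rate, cooperation frequency), one row per agent.
rng = np.random.default_rng(1)
profiles = np.vstack([
    rng.normal([0.8, 0.1, 0.1], 0.05, size=(5, 3)),   # aggressive agents
    rng.normal([0.1, 0.7, 0.2], 0.05, size=(5, 3)),   # gatherers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)

# Each cluster centroid acts as a sub-group stereotype: decisions are made
# against this profile instead of tracking all ten agents individually.
for k, centroid in enumerate(kmeans.cluster_centers_):
    members = int(np.sum(kmeans.labels_ == k))
    print(f"stereotype {k}: {np.round(centroid, 2)} ({members} agents)")
```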

IJCAI Conference 2016 Conference Paper

Belief Update for Proper Epistemic Knowledge Bases

  • Tim Miller
  • Christian Muise

Reasoning about the nested beliefs of others is important in many multi-agent scenarios. While epistemic and doxastic logics lay a solid groundwork to approach such reasoning, the computational complexity of these logics is often too high for many tasks. Proper Epistemic Knowledge Bases (PEKBs) enforce two syntactic restrictions on formulae to obtain efficient querying: neither disjunction nor infinitely long nestings of modal operators are permitted. PEKBs can be compiled, in exponential time, to a prime implicate formula that can be queried in polynomial time. More recently, it was shown that consistent PEKBs have certain logical properties that make this compilation unnecessary while still retaining polynomial-time querying. In this paper, we present a belief update mechanism for PEKBs that ensures the knowledge base remains consistent when new beliefs are added. This is achieved by first erasing any formulae that contradict these new beliefs. We show that this update mechanism can be computed in polynomial time, and we assess it against the well-known KM postulates for belief update.
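
The erase-then-add flavour of the update can be sketched on a toy encoding: a PEKB as a set of modal literals, each a tuple of belief operators plus a signed atom. The real mechanism also handles entailed contradictions, not just syntactic ones, so treat this as a rough approximation under that simplifying assumption.

```python
# A literal like ("a", "b", "+q") reads "a believes that b believes q".

def negate(lit):
    *prefix, atom = lit
    flipped = ("-" + atom[1:]) if atom.startswith("+") else ("+" + atom[1:])
    return (*prefix, flipped)

def update(pekb: set, new_beliefs: set) -> set:
    # Erase anything that directly contradicts the incoming beliefs...
    erased = pekb - {negate(lit) for lit in new_beliefs}
    # ...then add the new beliefs; the result stays contradiction-free
    # as long as new_beliefs itself is consistent.
    return erased | new_beliefs

kb = {("a", "+p"), ("a", "b", "+q")}
kb = update(kb, {("a", "-p")})
print(sorted(kb))  # ('a', '-p') replaced ('a', '+p'); nested belief kept
```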

AAAI Conference 2016 Conference Paper

‘Knowing Whether’ in Proper Epistemic Knowledge Bases

  • Tim Miller
  • Paolo Felli
  • Christian Muise
  • Adrian Pearce
  • Liz Sonenberg

Proper epistemic knowledge bases (PEKBs) are syntactic knowledge bases that use multi-agent epistemic logic to represent nested multi-agent knowledge and belief. PEKBs have certain syntactic restrictions that lead to desirable computational properties; primarily, a PEKB is a conjunction of modal literals, and therefore contains no disjunction. Sound entailment can be checked in polynomial time, and is complete for a large set of arbitrary formulae in logics Kn and KDn. In this paper, we extend PEKBs to deal with a restricted form of disjunction: ‘knowing whether’. An agent i knows whether ϕ iff agent i knows ϕ or knows ¬ϕ; that is, □iϕ ∨ □i¬ϕ. In our experience, the ability to represent that an agent knows whether something holds is useful in many multi-agent domains. We represent knowing whether with a modal operator, Δi, and present sound polynomial-time entailment algorithms on PEKBs with Δi in Kn and KDn, but which are complete for a smaller class of queries than standard PEKBs.

IJCAI Conference 2016 Conference Paper

Planning for a Single Agent in a Multi-Agent Environment Using FOND

  • Christian Muise
  • Paolo Felli
  • Tim Miller
  • Adrian R. Pearce
  • Liz Sonenberg

Single-agent planning in a multi-agent environment is challenging because the actions of other agents can affect our ability to achieve a goal. From a given agent's perspective, actions of others can be viewed as non-deterministic outcomes of that agent's actions. While simple conceptually, this interpretation of planning in a multi-agent environment as non-deterministic planning remains challenging, not only due to the non-determinism resulting from others' actions, but because it is not clear how to compactly model the possible actions of others in the environment. In this paper, we cast the problem of planning in a multi-agent environment as one of Fully-Observable Non-Deterministic (FOND) planning. We extend a non-deterministic planner to plan in a multi-agent setting, allowing non-deterministic planning technology to solve a new class of planning problems. To improve the efficiency in domains too large for solving optimally, we propose a technique to use the goals and possible actions of other agents to focus the search on a set of plausible actions. We evaluate our approach on existing and new multi-agent benchmarks, demonstrating that modelling the other agents' goals improves the quality of the resulting solutions.

AAMAS Conference 2016 Conference Paper

Requirements Specification in the Prometheus Methodology via Activity Diagrams (JAAMAS Extended Abstract)

  • Yoosef Abushark
  • John Thangarajah
  • Tim Miller
  • Michael Winikoff
  • James Harland

In this work we extend a popular agent design methodology, Prometheus, and improve the understandability and maintainability of requirements by automatically generating UML activity diagrams from existing requirements models; namely scenarios and goal hierarchies. The approach is general to all the methodologies that support similar notions in specifying requirements.

JAAMAS Journal 2016 Journal Article

Requirements specification via activity diagrams for agent-based systems

  • Yoosef Abushark
  • Tim Miller
  • James Harland

Goal-oriented agent systems are increasingly popular for developing complex applications that operate in highly dynamic environments. As with any software, these systems have to be designed starting with the specification of system requirements. In this paper, we extend a popular agent design methodology, Prometheus, and improve the understandability and maintainability of requirements by automatically generating UML activity diagrams from existing requirements models, namely scenarios and goal hierarchies. This approach aims to overcome some of the ambiguity present in the current requirements specification in Prometheus and provide more structure for representing variations. Even though our approach is grounded in Prometheus, it can be generalised to all methodologies that support similar notions in specifying requirements (i.e., notions of goals and scenarios). We present our approach and an evaluation based on user experiments. The evaluation showed that the activity-diagram-based approach enhances people’s understanding of the requirements, makes it easier to modify requirements, and easier to check them against the detailed design of the agents for coverage.

IJCAI Conference 2015 Conference Paper

Computing Social Behaviours Using Agent Models

  • Paolo Felli
  • Tim Miller
  • Christian Muise
  • Adrian R. Pearce
  • Liz Sonenberg

Agents can be thought of as following a social behaviour, depending on the context in which they are interacting. We devise a computationally grounded mechanism to represent and reason about others in social terms, reflecting the local perspective of an agent (first-person view), to support both stereotypical and empathetic reasoning. We use a hierarchy of agent models to discriminate which behaviours of others are plausible, and decide which behaviour for ourselves is socially acceptable, i.e., conforms to the social context. To this aim, we investigate the implications of considering agents capable of various degrees of theory of mind, and discuss a scenario showing how this affects behaviour.

AAAI Conference 2015 Conference Paper

Planning Over Multi-Agent Epistemic States: A Classical Planning Approach

  • Christian Muise
  • Vaishak Belle
  • Paolo Felli
  • Sheila McIlraith
  • Tim Miller
  • Adrian Pearce
  • Liz Sonenberg

Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents. However, planning involving nested beliefs is known to be computationally challenging. In this work, we address the task of synthesizing plans that necessitate reasoning about the beliefs of other agents. We plan from the perspective of a single agent with the potential for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability for one agent to reason as if it were another. We formally characterize our notion of planning with nested belief, and subsequently demonstrate how to automatically convert such problems into problems that appeal to classical planning technology. Our approach represents an important first step towards applying the well-established field of automated planning to the challenging task of planning involving nested beliefs of multiple agents.

AAMAS Conference 2011 Conference Paper

Substantiating Quality Goals with Field Data for Socially-Oriented Requirements Engineering

  • Sonja Pedell
  • Tim Miller
  • Leon Sterling
  • Frank Vetere
  • Steve Howard
  • Jeni Paay

We propose a method for using ethnographic field data to substantiate agent-based models for socially-oriented systems. We investigate in-situ use of domestic technologies created to encourage fun between grandparents and grandchildren separated by distance. The field data added an understanding of what intergenerational fun means when imbued with concrete activities. Our contribution is twofold. First, we extend the understanding of agent-oriented concepts by applying them to household interactions. Second, we establish a new method for informing quality goals with field data to enable development of novel applications in the domestic domain.

AAMAS Conference 2010 Conference Paper

Characterising and Matching Iterative and Recursive Agent Interaction Protocols

  • Tim Miller
  • Peter McBurney

For an agent to intelligently use specifications of executable protocols, it is necessary that the agent can quickly and correctly assess the outcomes of that protocol if it is executed. In some cases, this information may be attached to the specification; however, this is not always the case. In this paper, we present an algorithm for deriving characterisations of protocols. These characterisations specify the preconditions under which the protocol can be executed, and the outcomes of this execution. The algorithm is applicable to definitions with infinite iteration, and recursive definitions that terminate. We prove how a restricted subset of non-terminating recursive protocols can be characterised by rewriting them into equivalent non-recursive definitions before characterisation. We then define a method for matching protocols from their characterisations. We prove that the complexity of the matching method is less than for methods such as a depth-first search algorithm. Our experimental evaluation confirms this.

AAMAS Conference 2008 Conference Paper

Annotation and Matching of First-Class Agent Interaction Protocols

  • Tim Miller
  • Peter McBurney

Many practitioners view agent interaction protocols as rigid specifications that are defined a priori, and hard-code their agents with a set of protocols known at design time — an unnecessary restriction for intelligent and adaptive agents. To achieve the full potential of multi-agent systems, we believe that it is important that multi-agent interaction protocols are treated as first-class computational entities in systems. That is, they exist at runtime in systems as entities that can be referenced, inspected, composed, invoked and shared, rather than as abstractions that emerge from the behaviour of the participants. Using first-class protocols, a goal-directed agent can assess a library of protocols at runtime to determine which protocols best achieve a particular goal. In this paper, we present three methods for annotating protocols with their outcomes, and matching protocols using these annotations so that an agent can quickly and correctly find the protocols in its library that achieve a given goal. We discuss the advantages and disadvantages of each of these methods.

KER Journal 2006 Journal Article

Crossing the agent technology chasm: Lessons, experiences and challenges in commercial applications of agents

  • Steve Munroe
  • Tim Miller
  • Roxana A. Belecheanu
  • Michal Pěchouček
  • Peter McBurney
  • Michael Luck

Agent software technologies are still in an early stage of market development, where, arguably, the majority of users adopting the technology are visionaries who have recognized the long-term potential of agent systems. Some current adopters also see short-term net commercial benefits from the technology, and more potential users will need to perceive such benefits if agent technologies are to become widely used. One way to assist potential adopters to assess the costs and benefits of agent technologies is through the sharing of actual deployment histories of these technologies. Working in collaboration with several companies and organizations in Europe and North America, we have studied deployed applications of agent technologies, and we present these case studies in detail in this paper. We also review the lessons learnt, and the key issues arising from the deployments, to guide decision-making in research, in development and in implementation of agent software technologies.