Arrow Research search

Author name cluster

Michael Luck

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

80 papers
2 author rows

Possible papers (80)

JAAMAS Journal 2026 Journal Article

A Manifesto for Agent Technology: Towards Next Generation Computing

  • Michael Luck
  • Peter McBurney
  • Chris Preist

Abstract The European Commission's eEurope initiative aims to bring every citizen, home, school, business and administration online to create a digitally literate Europe. The value lies not in the objective itself, but in its ability to facilitate the advance of Europe into new ways of living and working. Just as in the first literacy revolution, our lives will change in ways never imagined. The vision of eEurope is underpinned by a technological infrastructure that is now taken for granted. Yet it provides us with the ability to pioneer radical new ways of doing business, of undertaking science, and of managing our everyday activities. Key to this step change is the development of appropriate mechanisms to automate and improve existing tasks, to anticipate desired actions on our behalf (as human users) and to undertake them, while at the same time enabling us to stay involved and retain as much control as required. For many, these mechanisms are now being realised by agent technologies, which are already providing dramatic and sustained benefits in several business and industry domains, including B2B exchanges, supply chain management, car manufacturing, and so on. While there are many real successes of agent technologies to report, there is still much to be done in research and development for the full benefits to be achieved. This is especially true in the context of environments of pervasive computing devices that are envisaged in coming years. This paper describes the current state of the art of agent technologies and identifies trends and challenges that will need to be addressed over the next 10 years to progress the field and realise the benefits. It offers a roadmap that is the result of discussions among participants from over 150 organisations including universities, research institutions, large multinational corporations and smaller IT start-up companies. The roadmap identifies successes and challenges, and points to future possibilities and demands; agent technologies are fundamental to the realisation of next generation computing.

AAAI Conference 2026 Conference Paper

Fairness Aware Reinforcement Learning via Proximal Policy Optimization

  • Gabriele La Malfa
  • Jie M. Zhang
  • Michael Luck
  • Elizabeth Black

Fairness in multi-agent systems (MAS) focuses on equitable reward distribution among agents in scenarios involving sensitive attributes such as race, gender, or socioeconomic status. This paper introduces fairness in Proximal Policy Optimization (PPO) with a penalty term derived from a fairness definition such as demographic parity, counterfactual fairness, or conditional statistical parity. The proposed method, which we call Fair-PPO, balances reward maximisation with fairness by integrating two penalty components: a retrospective component that minimises disparities in past outcomes and a prospective component that ensures fairness in future decision-making. We evaluate our approach in two games: the Allelopathic Harvest, a cooperative and competitive MAS focused on resource collection, where some agents possess a sensitive attribute, and HospitalSim, a hospital simulation, in which agents coordinate the operations of hospital patients with different mobility and priority needs. Experiments show that Fair-PPO achieves fairer policies than PPO across the fairness metrics and, through the retrospective and prospective penalty components, reveals a wide spectrum of strategies to improve fairness; at the same time, its performance is on a par with that of state-of-the-art fair reinforcement-learning algorithms. Fairness comes at the cost of reduced efficiency, but does not compromise equality among the overall population (Gini index). These findings underscore the potential of Fair-PPO to address fairness challenges in MAS.
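The general shape of a fairness-penalised PPO objective can be sketched as follows. This is a minimal illustration, not the paper's actual Fair-PPO formulation: the function names, the choice of demographic parity as the penalty, and the weight `lam` are all assumptions made here for clarity.

```python
# Hypothetical sketch: a PPO-style clipped surrogate with a demographic-parity
# penalty subtracted, as one plausible instance of a fairness-penalised loss.

def clipped_surrogate(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate for one sample."""
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1 + eps), 1 - eps) * advantage
    return min(unclipped, clipped)

def demographic_parity_gap(rewards, sensitive):
    """Absolute gap in mean reward between the two sensitive groups (0 and 1)."""
    g0 = [r for r, s in zip(rewards, sensitive) if s == 0]
    g1 = [r for r, s in zip(rewards, sensitive) if s == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

def fair_ppo_loss(ratios, advantages, rewards, sensitive, lam=0.5):
    """Surrogate to maximise, minus a weighted fairness penalty, negated to a loss."""
    surrogate = sum(clipped_surrogate(r, a)
                    for r, a in zip(ratios, advantages)) / len(ratios)
    penalty = demographic_parity_gap(rewards, sensitive)
    return -(surrogate - lam * penalty)
```

With equal group rewards the penalty vanishes and the loss reduces to the plain PPO surrogate; a reward gap between groups raises the loss in proportion to `lam`.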

JAAMAS Journal 2026 Journal Article

The dMARS Architecture: A Specification of the Distributed Multi-Agent Reasoning System

  • Mark d'Inverno
  • Michael Luck
  • Michael Wooldridge

Abstract The Procedural Reasoning System (PRS) is the best established agent architecture currently available. It has been deployed in many major industrial applications, ranging from fault diagnosis on the space shuttle to air traffic management and business process control. The theory of PRS-like systems has also been widely studied: within the intelligent agents research community, the belief-desire-intention (BDI) model of practical reasoning that underpins PRS is arguably the dominant force in the theoretical foundations of rational agency. Despite the interest in PRS and BDI agents, no complete attempt has yet been made to precisely specify the behaviour of real PRS systems. This has led to the development of a range of systems that claim to conform to the PRS model, but which differ from it in many important respects. Our aim in this paper is to rectify this omission. We provide an abstract formal model of an idealised dMARS system (the most recent implementation of the PRS architecture), which precisely defines the key data structures present within the architecture and the operations that manipulate these structures. We focus in particular on dMARS plans, since these are the key tool for programming dMARS agents. The specification we present will enable other implementations of PRS to be easily developed, and will serve as a benchmark against which future architectural enhancements can be evaluated.

NeurIPS Conference 2025 Conference Paper

Large Language Models Miss the Multi-agent Mark

  • Emanuele La Malfa
  • Gabriele La Malfa
  • Samuele Marro
  • Jie Zhang
  • Elizabeth Black
  • Michael Luck
  • Philip Torr
  • Michael Wooldridge

Recent interest in Multi-Agent Systems of Large Language Models (MAS LLMs) has led to an increase in frameworks leveraging multiple LLMs to tackle complex tasks. However, much of this literature appropriates the terminology of MAS without engaging with its foundational principles. In this position paper, we highlight critical discrepancies between MAS theory and current MAS LLMs implementations, focusing on four key areas: the social aspect of agency, environment design, coordination and communication protocols, and measuring emergent behaviours. Our position is that many MAS LLMs lack multi-agent characteristics such as autonomy, social interaction, and structured environments, and often rely on oversimplified, LLM-centric architectures. The field may slow down and lose traction by revisiting problems the MAS literature has already addressed. Therefore, we systematically analyse this issue and outline associated research opportunities; we advocate for better integrating established MAS concepts and more precise terminology to avoid mischaracterisation and missed opportunities.

IJCAI Conference 2025 Conference Paper

Quantifying the Self-Interest Level of Markov Social Dilemmas

  • Richard Willis
  • Yali Du
  • Joel Z. Leibo
  • Michael Luck

This paper introduces a novel method for estimating the self-interest level of Markov social dilemmas. We extend the concept of self-interest level from normal-form games to Markov games, providing a quantitative measure of the minimum reward exchange required to align individual and collective interests. We demonstrate our method on three environments from the Melting Pot suite, representing either common-pool resources or public goods. Our results illustrate how reward exchange can enable agents to transition from selfish to collective equilibria in a Markov social dilemma. This work contributes to multi-agent reinforcement learning by providing a practical tool for analysing complex, multistep social dilemmas. Our findings offer insights into how reward structures can promote or hinder cooperation, with potential applications in areas such as mechanism design.
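The self-interest level idea can be illustrated in the simpler normal-form setting it originates from (the paper extends it to Markov games). The sketch below, with hypothetical names, scans for the largest fraction of its own reward each player can keep, with the remainder exchanged with the co-player, such that cooperation becomes dominant in a Prisoner's Dilemma.

```python
# Illustrative two-player sketch: after a symmetric reward exchange in which
# each player keeps a fraction `keep` of its own payoff and receives the rest
# of the other's, find the largest `keep` that makes cooperation dominant.

def exchanged(payoff, keep):
    """Row player's effective payoff after the exchange."""
    mine, theirs = payoff
    return keep * mine + (1 - keep) * theirs

def cooperate_dominant(game, keep):
    """C dominates D for the row player, whatever the column player does."""
    return (exchanged(game[("C", "C")], keep) >= exchanged(game[("D", "C")], keep)
            and exchanged(game[("C", "D")], keep) >= exchanged(game[("D", "D")], keep))

def self_interest_level(game, steps=1000):
    """Largest keep-fraction making cooperation dominant, found by a grid scan."""
    for i in range(steps, -1, -1):
        keep = i / steps
        if cooperate_dominant(game, keep):
            return keep
    return None

# Prisoner's Dilemma: (row player's reward, column player's reward).
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
```

For these payoffs the scan returns 0.6: each player can retain at most 60% of its own reward before defection becomes tempting again.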

AAMAS Conference 2025 Conference Paper

Resolving Social Dilemmas with Minimal Reward Transfer - Extended Abstract

  • Richard Willis
  • Yali Du
  • Joel Z. Leibo
  • Michael Luck

In this paper we introduce a novel metric, the general self-interest level, to quantify the disparity between individual and group rationality in social dilemma games. This metric represents the maximum proportion of their individual rewards that agents can retain while guaranteeing that a social welfare optimum is achieved. This work provides both a tool for describing social dilemmas and a prescriptive solution for resolving them via reward transfer contracts. In contrast to existing metrics, the general self-interest level can enable more efficient solutions to be found. Applications include mechanism design, where we can assess the impact on collective behaviour of modifications to models of environments.

AAMAS Conference 2024 Conference Paper

Combining Theory of Mind and Abductive Reasoning in Agent-Oriented Programming

  • Nieves Montes
  • Michael Luck
  • Nardine Osman
  • Odinaldo Rodrigues
  • Carles Sierra

In this paper we present TomAbd, a novel agent model extending the BDI architecture with Theory of Mind capabilities, i.e., the capacity to adopt and reason from the perspective of others. By combining the Theory of Mind of TomAbd agents with abductive reasoning, agents can infer explanations for the behaviour of others, which they can incorporate into their own decision-making. We have implemented the TomAbd agent model and successfully tested its performance in the cooperative board game Hanabi.

ECAI Conference 2024 Conference Paper

Explorative Imitation Learning: A Path Signature Approach for Continuous Environments

  • Nathan Gavenski
  • Juarez Monteiro
  • Felipe Meneguzzi
  • Michael Luck
  • Odinaldo Rodrigues

Some imitation learning methods combine behavioural cloning with self-supervision to infer actions from state pairs. However, most rely on a large number of expert trajectories to increase generalisation and human intervention to capture key aspects of the problem, such as domain constraints. In this paper, we propose Continuous Imitation Learning from Observation (CILO), a new method augmenting imitation learning with two important features: (i) exploration, allowing for more diverse state transitions, requiring fewer expert trajectories and resulting in fewer training iterations; and (ii) path signatures, allowing for automatic encoding of constraints, through the creation of non-parametric representations of agents and expert trajectories. We compared CILO with a baseline and two leading imitation learning methods in five environments. It had the best overall performance of all methods in all environments, outperforming the expert in two of them.
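Path signatures are sequences of iterated integrals that summarise a trajectory independently of its parametrisation. As a worked example, here is the depth-2 signature of a piecewise-linear path in the plane, computed with Chen's identity; this is a generic illustration of the mathematical object, not CILO's actual encoding of trajectories.

```python
# Depth-2 path signature of a piecewise-linear path in R^2.
# Level 1 holds the total increments; level 2 holds the iterated
# integrals S^{ij}, updated segment by segment via Chen's identity.

def signature2(points):
    """Return (level1, level2) signature terms for a 2-D path."""
    s1 = [0.0, 0.0]                    # total increment per coordinate
    s2 = [[0.0, 0.0], [0.0, 0.0]]      # iterated integrals S^{ij}
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx = [x1 - x0, y1 - y0]
        for i in range(2):             # update level 2 before level 1
            for j in range(2):
                s2[i][j] += s1[i] * dx[j] + 0.5 * dx[i] * dx[j]
        for i in range(2):
            s1[i] += dx[i]
    return s1, s2
```

For the L-shaped path (0,0) → (1,0) → (1,1), level 1 is the endpoint displacement [1, 1], and the asymmetry of level 2 (S^{12} = 1, S^{21} = 0) records the order in which the two directions were traversed, which is exactly what distinguishes it from the mirror-image path.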

AAMAS Conference 2024 Conference Paper

Imitation Learning Datasets: A Toolkit For Creating Datasets, Training Agents and Benchmarking

  • Nathan Gavenski
  • Michael Luck
  • Odinaldo Rodrigues

The imitation learning field requires expert data to train agents in a task. Most often, this learning approach suffers from the absence of available data, which results in techniques being tested on their own datasets. Creating datasets is a cumbersome process requiring researchers to train expert agents from scratch, record their interactions and test each benchmark method with newly created data. Moreover, creating new datasets for each new technique results in a lack of consistency in the evaluation process, since each dataset can drastically vary in state and action distribution. In response, this work aims to address these issues by creating Imitation Learning Datasets, a toolkit that allows for: (i) curated expert policies with multithreaded support for faster dataset creation; (ii) readily available datasets and techniques with precise measurements; and (iii) sharing implementations of common imitation learning techniques. Demonstration link: https://nathangavenski.github.io/#/il-datasets-video

AAMAS Conference 2024 Conference Paper

Multi-user Norm Consensus

  • Marc Serramia
  • Natalia Criado
  • Michael Luck

Many agents act in environments with multiple human users, from care robots to smart assistants. When interacting in multi-user environments it is paramount that these agents act as all users expect. However, it is not always possible to have well-defined collective preferences, nor to easily infer them from individual preferences. This is especially true in fast changing environments, like a device placed in a public space where users can enter and exit freely. In response, this paper proposes a model to represent individual preferences about the behaviour of an agent and a mechanism to find multi-user consensuses over these preferences. Norms can then be generated to ensure that when the agent follows them it will act according to the preferences of all users. We formalise what a consensus norm is and what properties the set of consensus norms should satisfy (i.e., generate the minimum number of norms while maximising the coverage of user preferences). We provide an optimisation approach to find this set of norms and show that our approach satisfies the aforementioned properties.
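The trade-off described above, few norms but wide preference coverage, has the flavour of a set-cover problem. The sketch below substitutes a greedy heuristic for the paper's optimisation approach; the data layout and names are hypothetical.

```python
# Hypothetical sketch: greedily pick a small set of candidate norms that
# together cover the users' preferences (a stand-in for the paper's exact
# optimisation formulation).

def consensus_norms(candidates, preferences):
    """candidates maps each norm to the set of preferences it satisfies."""
    chosen, covered = [], set()
    while covered < preferences:                      # proper subset: keep going
        best = max(candidates, key=lambda n: len(candidates[n] - covered))
        gain = candidates[best] - covered
        if not gain:                                  # nothing left is coverable
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered
```

Each iteration adds the norm covering the most still-uncovered preferences, so the result favours few norms with high coverage, mirroring the two properties the paper formalises.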

JAAMAS Journal 2024 Journal Article

Resolving social dilemmas with minimal reward transfer

  • Richard Willis
  • Yali Du
  • Michael Luck

Abstract Social dilemmas present a significant challenge in multi-agent cooperation because individuals are incentivised to behave in ways that undermine socially optimal outcomes. Consequently, self-interested agents often avoid collective behaviour. In response, we formalise social dilemmas and introduce a novel metric, the general self-interest level, to quantify the disparity between individual and group rationality in such scenarios. This metric represents the maximum proportion of their individual rewards that agents can retain while ensuring that a social welfare optimum becomes a dominant strategy. Our approach diverges from traditional concepts of altruism, instead focusing on strategic reward redistribution. By transferring rewards among agents in a manner that aligns individual and group incentives, rational agents will maximise collective welfare while pursuing their own interests. We provide an algorithm to compute efficient transfer structures for an arbitrary number of agents, and introduce novel multi-player social dilemma games to illustrate the effectiveness of our method. This work provides both a descriptive tool for analysing social dilemmas and a prescriptive solution for resolving them via efficient reward transfer contracts. Applications include mechanism design, where we can assess the impact on collaborative behaviour of modifications to models of environments.

AAAI Conference 2024 System Paper

SemLa: A Visual Analysis System for Fine-Grained Text Classification

  • Munkhtulga Battogtokh
  • Cosmin Davidescu
  • Michael Luck
  • Rita Borgo

Fine-grained text classification requires models to distinguish between many fine-grained classes that are hard to tell apart. However, despite the increased risk of models relying on confounding features and predictions being especially difficult to interpret in this context, existing work on the interpretability of fine-grained text classification is severely limited. Therefore, we introduce our visual analysis system, SemLa, which incorporates novel visualization techniques that are tailored to this challenge. Our evaluation based on case studies and expert feedback shows that SemLa can be a powerful tool for identifying model weaknesses, making decisions about data annotation, and understanding the root cause of errors.

IJCAI Conference 2024 Conference Paper

The Role of Perception, Acceptance, and Cognition in the Usefulness of Robot Explanations

  • Hana Kopecka
  • Jose Such
  • Michael Luck

It is known that when interacting with explainable autonomous systems, user characteristics are important in determining the most appropriate explanation, but understanding which user characteristics are most relevant to consider is not simple. This paper explores such characteristics and analyses how they affect the perceived usefulness of four types of explanations based on the robot's mental states. These types are belief, goal, hybrid (goal and belief) and baseline explanations. In this study, the explanations were evaluated in the context of a domestic service robot. The user characteristics considered are the perception of the robot's rationality and autonomy, the acceptance of the robot and the user's cognitive tendencies. We found differences in perceived usefulness between explanation types based on user characteristics, with hybrid explanations being the most useful.

JAAMAS Journal 2023 Journal Article

Combining theory of mind and abductive reasoning in agent-oriented programming

  • Nieves Montes
  • Michael Luck
  • Carles Sierra

Abstract This paper presents a novel model, called TomAbd, that endows autonomous agents with Theory of Mind capabilities. TomAbd agents are able to simulate the perspective of the world that their peers have and reason from their perspective. Furthermore, TomAbd agents can reason from the perspective of others down to an arbitrary level of recursion, using Theory of Mind of nth order. By combining the previous capability with abductive reasoning, TomAbd agents can infer the beliefs that others were relying upon to select their actions, hence putting them in a more informed position when it comes to their own decision-making. We have tested the TomAbd model in the challenging domain of Hanabi, a game characterised by cooperation and imperfect information. Our results show that the abilities granted by the TomAbd model boost the performance of the team along a variety of metrics, including final score, efficiency of communication, and uncertainty reduction.
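The core abductive step, working backwards from an observed action to the beliefs that would explain it, can be shown with a toy Hanabi-flavoured rule base. The rules and names here are invented for illustration and are far simpler than TomAbd's plan-library reasoning.

```python
# Toy abduction in the spirit of TomAbd (hypothetical rules): from an observed
# action, list every belief that would have led a peer to select it, then
# discard explanations the observer already knows to be impossible.

RULES = [
    # (belief that must hold, action the agent then selects)
    ("card_is_playable", "play_card"),
    ("endgame_gamble", "play_card"),
    ("card_is_useless", "discard_card"),
    ("teammate_needs_info", "give_hint"),
]

def abduce_beliefs(observed_action, rules=RULES):
    """All candidate beliefs that explain the observed action."""
    return [b for b, a in rules if a == observed_action]

def filter_explanations(observed_action, ruled_out, rules=RULES):
    """Keep only explanations consistent with what the observer knows."""
    return [b for b in abduce_beliefs(observed_action, rules)
            if b not in ruled_out]
```

Seeing a teammate play a card yields two candidate explanations; if the observer knows it is not the endgame, only "the card is playable" survives, and that inferred belief can feed the observer's own decision-making.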

AAMAS Conference 2023 Conference Paper

Predicting Privacy Preferences for Smart Devices as Norms

  • Marc Serramia
  • William Seymour
  • Natalia Criado
  • Michael Luck

Smart devices, such as smart speakers, are becoming ubiquitous, and users expect these devices to act in accordance with their preferences. In particular, since these devices gather and manage personal data, users expect them to adhere to their privacy preferences. However, the current approach of gathering these preferences consists in asking the users directly, which usually triggers automatic responses failing to capture their true preferences. In response, in this paper we present a collaborative filtering approach to predict user preferences as norms. These preference predictions can be readily adopted or can serve to assist users in determining their own preferences. Using a dataset of privacy preferences of smart assistant users, we test the accuracy of our predictions.
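A minimal version of the collaborative-filtering step can be sketched as follows: encode each user's privacy choices as ±1 per setting and predict a missing setting from a similarity-weighted vote of other users. The setting names and the cosine/weighted-vote choices are assumptions of this sketch; the paper additionally frames the predictions as norms.

```python
# Hypothetical sketch: user-based collaborative filtering over privacy
# settings, with preferences encoded as +1 (allow) / -1 (deny).
from math import sqrt

def similarity(u, v):
    """Cosine similarity over the settings both users have answered."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[s] * v[s] for s in shared)
    nu = sqrt(sum(u[s] ** 2 for s in shared))
    nv = sqrt(sum(v[s] ** 2 for s in shared))
    return dot / (nu * nv)

def predict(target, others, setting):
    """Similarity-weighted vote of other users on one missing setting."""
    num = den = 0.0
    for other in others:
        if setting in other:
            w = similarity(target, other)
            num += w * other[setting]
            den += abs(w)
    return num / den if den else 0.0

# Example: the target user has not yet answered "share_contacts".
target = {"mic_always_on": 1, "share_location": 1}
others = [
    {"mic_always_on": 1, "share_location": 1, "share_contacts": 1},
    {"mic_always_on": -1, "share_location": -1, "share_contacts": -1},
]
```

A positive prediction suggests recommending "allow" for that setting; the sign and magnitude could then be offered to the user rather than adopted silently.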

NeSy Conference 2022 Conference Paper

From Subsymbolic to Symbolic: A Blueprint for Investigation

  • Joseph Pober
  • Michael Luck
  • Odinaldo Rodrigues

In this paper, we sketch a framework for integration between subsymbolic and symbolic representations, consisting of a series of layers and mappings between elements across the layers. Each layer corresponds to a particular level of abstraction about phenomena in the environment being observed in the layers below. Through an iterative process, the differences between the elements in successive iterations within a given layer are captured as transformations between the elements and used for identification and recognition of objects as well as prediction and verification of the environment in future iterations. A bridge between the subsymbolic and symbolic levels can be built by successively adding layers at ever more sophisticated levels of abstraction. This approach aims to benefit from subsymbolic learning, while harnessing the abstraction and reasoning powers of classical symbolic AI techniques.

KER Journal 2019 Journal Article

Time-sensitive resource re-allocation strategy for interdependent continuous tasks

  • Valeriia Haberland
  • Simon Miles
  • Michael Luck

Abstract An increase in volumes of data and a shift towards live data have enabled a stronger focus on resource-intensive tasks which run continuously over long periods. A Grid has the potential to offer the required resources for these tasks, while considering a fair and balanced allocation of resources among multiple client agents. Taking this into account, a Grid might be unwilling to allocate its resources for a long time, leading to task interruptions. This problem becomes even more serious if an interruption of one task may lead to the interruption of dependent tasks. Here, we discuss a new strategy for resource re-allocation which is utilized by a client with the aim of preventing overly long interruptions by re-allocating resources between its own tasks. Those re-allocations are suggested by a client agent, but only a Grid can re-allocate resources if agreed. Our strategy was tested under different Grid settings, accounting for the adjusted coefficients, and demonstrated noticeable improvements in client utilities compared to when it is not used. Our experiment was also extended to tests with environmental modelling and realistic Grid resource simulation, grounded in real-life Grid studies. These tests have also shown a useful application of our strategy.

KER Journal 2017 Journal Article

Engineering the emergence of norms: a review

  • Chris Haynes
  • Michael Luck
  • Peter McBurney
  • Samhar Mahmoud
  • Tomáš Vítek
  • Simon Miles

Abstract Complex systems often exhibit emergent behaviour, unexpected macro-level behaviour caused by the interaction of micro-level components. In multiagent systems, these micro-level components may be autonomous agents and the emergent behaviour may be expressed as norms—patterns of behaviour that arise among the agents in response to their environment and each other. These emergent norms may be beneficial (e.g. by encouraging cooperative behaviour), or detrimental, but in either case it is useful to recognize these norms as they emerge and either encourage or discourage their establishment. We term this process engineering the emergence of norms and have identified three steps: the identification of a possible norm, evaluation of its benefit and its encouragement (or discouragement). This paper is an attempt to provide a survey of existing research related to these steps. We also provide an analysis of the approaches based upon their suitability for a variety of normative systems: we examine the requirements for agents to have autonomy over their choice of norms, the degree of observability required in the system, and the norm enforcement methods. The paper concludes with a discussion of open issues.

JAAMAS Journal 2017 Journal Article

Establishing norms with metanorms over interaction topologies

  • Samhar Mahmoud
  • Nathan Griffiths
  • Michael Luck

Abstract Norms are a valuable means of establishing coherent cooperative behaviour in decentralised systems in which there is no central authority. Axelrod’s seminal model of norm establishment in populations of self-interested individuals provides some insight into the mechanisms needed to support this through the use of metanorms, but considers only limited scenarios and domains. While further developments of Axelrod’s model have addressed some of the limitations, there is still only limited consideration of such metanorm models with more realistic topological configurations. In response, this paper addresses this limitation by considering the application of the metanorm model to different topological structures. Our results suggest that norm establishment is achievable in lattices and small worlds, while such establishment is not achievable in scale-free networks, due to the problematic effects of hubs. The paper offers a solution, first by adjusting the model to more appropriately reflect the characteristics of the problem, and second by offering a new dynamic policy adaptation approach to learning the right behaviour. Experimental results demonstrate that this dynamic policy adaptation overcomes the difficulties posed by the asymmetric distribution of links in scale-free networks, leading to an absence of norm violation, and instead to norm emergence.
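The payoff structure of Axelrod's norms game, which several of the papers in this cluster build on, can be sketched in a few lines. This is a single round with illustrative payoff constants; Axelrod's full model evolves boldness and vengefulness over many generations, and the metapunishment of non-punishers is omitted here for brevity.

```python
# One round of an Axelrod-style norms game (illustrative constants).
# Each agent defects when its boldness exceeds the chance of being seen;
# observers punish defectors with probability equal to their vengefulness.
import random

TEMPTATION, HURT = 3, -1          # defector's gain; cost to each other agent
PUNISH_COST, PUNISHED = -2, -9    # enforcement cost; penalty on the defector

def play_round(agents, rng):
    """agents: list of dicts with 'boldness' and 'vengefulness' in [0, 1]."""
    scores = [0.0] * len(agents)
    for i, agent in enumerate(agents):
        seen = rng.random()                 # chance this defection is observed
        if agent["boldness"] > seen:        # defect when unlikely to be seen
            scores[i] += TEMPTATION
            for j, other in enumerate(agents):
                if j == i:
                    continue
                scores[j] += HURT
                if rng.random() < seen and rng.random() < other["vengefulness"]:
                    scores[j] += PUNISH_COST
                    scores[i] += PUNISHED
    return scores
```

Restricting which pairs (i, j) interact or observe each other is exactly where the topological variants above diverge from the fully mixed original.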

AAMAS Conference 2016 Conference Paper

Cooperation Emergence under Resource-Constrained Peer Punishment

  • Samhar Mahmoud
  • Simon Miles
  • Michael Luck

In distributed computational systems with no central authority, social norms have shown great potential in regulating the behaviour of self-interested agents, due to their distributed cost. In this context, peer punishment has been an important instrument in enabling social norms to emerge, and such punishment is usually assigned a certain enforcement cost that is paid by agents applying it. However, models that investigate the use of punishment as a mechanism to allow social norms to emerge usually assume that unlimited resources are available to agents to cope with the resulting enforcement costs, yet this assumption may not hold in real world computational systems, since resources are typically limited and thus need to be used optimally. In this paper, we use a modified version of the metanorm model originally proposed by Axelrod [1] to investigate this, and show that it allows norm emergence only in limited cases under bounded resources. In response, we propose a resource-aware adaptive punishment technique to address this limitation, and give an experimental evaluation of the new technique that shows it enables norm establishment under limited resources.

EUMAS Conference 2016 Conference Paper

Resource Re-allocation for Data Inter-dependent Continuous Tasks in Grids

  • Valeriia Haberland
  • Simon Miles
  • Michael Luck

Abstract Many researchers focus on resource-intensive tasks which have to be run continuously over long periods. A Grid may offer resources for these tasks, but they are contested by multiple client agents. Hence, a Grid might be unwilling to allocate its resources for long terms, leading to task interruptions. This issue becomes more substantial when tasks are data inter-dependent, where one interrupted task may cause an interruption of a bundle of other tasks. In this paper, we discuss a new resource re-allocation strategy for a client, in which resources are re-allocated between the client tasks in order to avoid prolonged interruptions. Those re-allocations are decided by a client agent, but they should be agreed with a Grid and can be performed only by a Grid. Our strategy has been tested within different Grid environments and noticeably improves client utilities in almost all cases.

JAAMAS Journal 2015 Journal Article

A coherence maximisation process for solving normative inconsistencies

  • Natalia Criado
  • Elizabeth Black
  • Michael Luck

Abstract Norms can be used in multi-agent systems for defining patterns of behaviour in terms of permissions, prohibitions and obligations that are addressed to agents playing a specific role. Agents may play different roles during their execution and they may even play different roles simultaneously. As a consequence, agents may be affected by inconsistent norms; e.g., an agent may be simultaneously obliged and forbidden to reach a given state of affairs. Dealing with this type of inconsistency is one of the main challenges of normative reasoning. Existing approaches tackle this problem by using a static and predefined order that determines which norm should prevail in the case where two norms are inconsistent. One main drawback of these proposals is that they allow only pairwise comparison of norms; it is not clear how agents may use the predefined order to select a subset of norms to abide by from a set of norms containing multiple inconsistencies. Furthermore, in dynamic and non-deterministic environments it can be difficult or even impossible to specify an order that resolves inconsistencies satisfactorily in all potential situations. In response to these two problems, we propose a mechanism with which an agent can dynamically compute a preference order over subsets of its competing norms by considering the coherence of its cognitive and normative elements. Our approach allows flexible resolution of normative inconsistencies, tailored to the current circumstances of the agent. Moreover, our solution can be used to determine norm prevalence among a set of norms containing multiple inconsistencies.
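Selecting a subset of competing norms rather than resolving pairwise conflicts can be sketched as a search over consistent subsets. The additive scoring below is a hypothetical stand-in for the paper's coherence calculation over cognitive and normative elements.

```python
# Hypothetical sketch: enumerate consistent subsets of competing norms and
# return the one with the highest (here: additive) coherence score.
from itertools import combinations

def consistent(subset, conflicts):
    """A subset is consistent if it contains no conflicting pair of norms."""
    return not any((a, b) in conflicts or (b, a) in conflicts
                   for a, b in combinations(subset, 2))

def most_coherent(norms, conflicts, weight):
    """Best consistent subset under an additive coherence score."""
    best, best_score = (), float("-inf")
    for r in range(len(norms) + 1):
        for subset in combinations(norms, r):
            if consistent(subset, conflicts):
                score = sum(weight[n] for n in subset)
                if score > best_score:
                    best, best_score = subset, score
    return set(best), best_score
```

For example, with an obligation and a prohibition on the same action in conflict, the agent keeps whichever combination of non-conflicting norms scores highest, which handles multiple simultaneous inconsistencies at once rather than pair by pair.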

KER Journal 2015 Journal Article

An introduction to reasoning over qualitative multi-attribute preferences

  • Ingrid Nunes
  • Simon Miles
  • Michael Luck
  • Carlos J. P. Lucena

Abstract Research on preferences has significantly increased in recent years, as it involves not only many subproblems to be investigated, such as elicitation, representation, and reasoning, but has also been the target of different research areas, for example, artificial intelligence and databases. In particular, much work has focused on qualitative preferences, because these are closer to the way people express their preferences in comparison with quantitative preferences. Against this background, a large number of approaches have been proposed, associated with heterogeneous areas, so that these approaches are usually just compared with those of the same area. In response, we present in this paper a survey of approaches to qualitative multi-attribute preference reasoning, covering different research areas. We introduce selected approaches that propose different techniques and algorithms, which take as input qualitative multi-attribute preference statements following a particular structure specified by the approach. We analyse each approach in a systematic way and discuss their commonalities and limitations.

AILAW Journal 2015 Journal Article

Establishing norms with metanorms in distributed computational systems

  • Samhar Mahmoud
  • Nathan Griffiths
  • Jeroen Keppens
  • Adel Taweel
  • Trevor J. M. Bench-Capon
  • Michael Luck

Abstract Norms provide a valuable mechanism for establishing coherent cooperative behaviour in decentralised systems in which there is no central authority. One of the most influential formulations of norm emergence was proposed by Axelrod (Am Political Sci Rev 80(4): 1095–1111, 1986 ). This paper provides an empirical analysis of aspects of Axelrod’s approach, by exploring some of the key assumptions made in previous evaluations of the model. We explore the dynamics of norm emergence and the occurrence of norm collapse when applying the model over extended durations. It is this phenomenon of norm collapse that can motivate the emergence of a central authority to enforce laws and so preserve the norms, rather than relying on individuals to punish defection. Our findings identify characteristics that significantly influence norm establishment using Axelrod’s formulation, but are likely to be of importance for norm establishment more generally. Moreover, Axelrod’s model suffers from significant limitations in assuming that private strategies of individuals are available to others, and that agents are omniscient in being aware of all norm violations and punishments. Because this is an unreasonable expectation, the approach does not lend itself to modelling real-world systems such as online networks or electronic markets. In response, the paper proposes alternatives to Axelrod’s model, by replacing the evolutionary approach, enabling agents to learn, and by restricting the metapunishment of agents to cases where the original defection is observed, in order to be able to apply the model to real-world domains. This work can also help explain the formation of a “social contract” to legitimate enforcement by a central authority.

AILAW Journal 2015 Journal Article

Monitoring compliance with E-contracts and norms

  • Sanjay Modgil
  • Nir Oren
  • Noura Faci
  • Felipe Meneguzzi
  • Simon Miles
  • Michael Luck

Abstract The behaviour of autonomous agents may deviate from that deemed to be for the good of the societal systems of which they are a part. Norms have therefore been proposed as a means to regulate agent behaviours in open and dynamic systems, where these norms specify the obliged, permitted and prohibited behaviours of agents. Regulation can effectively be achieved through use of enforcement mechanisms that result in a net loss of utility for an agent in cases where the agent’s behaviour fails to comply with the norms. Recognition of compliance is thus crucial for achieving regulation. In this paper, we propose a general framework for observation of agents’ behaviour, and recognition of this behaviour as constituting, or counting as, compliance or violation. The framework deploys monitors that receive inputs from trusted observers, and processes these inputs together with transition network representations of individual norms. In this way, monitors determine the fulfillment or violation status of norms. The paper also describes a proof of concept implementation of the framework, and its deployment in electronic contracting environments.

JAAMAS Journal 2015 Journal Article

Negotiation strategy for continuous long-term tasks in a grid environment

  • Valeriia Haberland
  • Simon Miles
  • Michael Luck

Abstract Nowadays, much research is concerned with execution of long-term continuous tasks, which produce data in real time, e.g. monitoring applications. These tasks can be run for months or years and they are usually resource intensive in terms of the large amounts of data which are processed per time unit. A Grid can potentially provide the amount of resources necessary to execute these tasks, but it might prove to be impossible or non-beneficial for a Grid to allocate resources for such long durations, as these resources can also be requested by other clients or might join a Grid only for some periods of time. To resolve these differences, a client and a Grid Resource Allocator negotiate, and a client has to agree to a shorter execution period, at the end of which it needs to negotiate again. In this paper, we discuss in detail a decision-making mechanism for a client as part of its negotiation strategy, which aims to increase the duration of execution periods and to decrease the duration of interruptions. This new strategy, ConTask, has been tested on a realistic Grid resource simulator, and it demonstrates better utilities under various conditions than our earlier strategy, which was not specifically designed for continuous tasks.

EUMAS Conference 2015 Invited Paper

Probationary Contracts: Reducing Risk in Norm-Based Systems

  • Chris Haynes
  • Simon Miles
  • Michael Luck

Abstract In human organisations, it is common to subject new employees to periods of probation for which additional restrictions or oversight apply in order to reduce the consequences of poor recruitment choices. In a similar way, multi-agent organisations may need to employ agents of unknown trustworthiness to perform services defined by contracts (or sets of norms), yet these agents may violate the norms for their own advantage. Here, the risk of employing such agents depends on the agent's trustworthiness and the consequences of norm violation. In response, in this paper we propose the use of probationary contracts, generated by adding obligations to standard contracts in order to further constrain agent behaviour. We evaluate our work using agent-based simulations of abstract tasks, and present results showing that using probationary roles reduces the risk of using unknown agents, especially where violating a norm has serious consequences.

ECAI Conference 2014 Conference Paper

Information-based Incentivisation when Rewards are Inadequate

  • Samhar Mahmoud
  • Lina Barakat
  • Simon Miles
  • Adel Taweel
  • Brendan Delaney
  • Michael Luck

In many cases, intermediaries play a major role in linking service providers and their target users. Yet, attracting intermediaries at a marketplace to promote a service to their existing customers can be very challenging, since they are usually very busy and would incur additional cost as a result of such promotion. In response, this paper presents an information-based incentivisation framework, which combines financial rewards with other motivating information, in order to incentivise intermediaries at a marketplace to undertake service promotion. Specifically, the intermediaries are associated with a group of incentivising agents, capable of learning the individual motivational needs of these intermediaries, and accordingly target them with the most effective incentives. The incentivising agents collaborate with each other to gather motivational information, by sharing their observations on intermediaries. The proposed incentivisation approach is evaluated through a corresponding agent-based simulation, and the experimental results obtained demonstrate its effectiveness.

ECAI Conference 2014 Conference Paper

Negotiation to Execute Continuous Long-Term Tasks

  • Valeriia Haberland
  • Simon Miles
  • Michael Luck

Recently, research has focused on processing tasks that require continuous execution to produce data in a real-time manner. Such tasks often also need to be executed for long periods of time, such as years, requiring large amounts of resources (e.g. CPUs) that can be found in a Grid. However, a Grid may be unwilling or unable to allocate resources for continuous usage far in advance, because of high fluctuations in resource availability and/or resource demand. Therefore, a client must relax its requirements in terms of long-term execution, and negotiate a shorter period of execution time; when this period ends, the client must negotiate again to continue the task's execution. We propose a negotiation strategy, ConTask, which helps to increase the periods of execution time, and reduce the length of interruptions between them.

ECAI Conference 2014 Conference Paper

Pattern-based Explanation for Automated Decisions

  • Ingrid Nunes
  • Simon Miles
  • Michael Luck
  • Simone Diniz Junqueira Barbosa
  • Carlos Lucena

Explanations play an essential role in decision support and recommender systems as they are directly associated with the acceptance of those systems and the choices they make. Although approaches have been proposed to explain automated decisions based on multi-attribute decision models, there is a lack of evidence that they produce the explanations users need. In response, in this paper we propose an explanation generation technique, which follows user-derived explanation patterns. It receives as input a multi-attribute decision model, which is used together with user-centric principles to make a decision for which an explanation is generated. The technique includes algorithms that select relevant attributes and produce an explanation that justifies an automated choice. An evaluation with a user study demonstrates the effectiveness of our approach.

IJCAI Conference 2013 Conference Paper

Communicating Open Systems (Extended Abstract)

  • MARK D'INVERNO
  • Michael Luck
  • Pablo Noriega
  • Juan A. Rodriguez-Aguilar
  • Carles Sierra

Just as conventional institutions are organisational structures for coordinating the activities of multiple interacting individuals, electronic institutions provide a computational analogue for coordinating the activities of multiple interacting software agents. In this paper, we argue that open multi-agent systems can be effectively designed and implemented as electronic institutions, for which we provide a comprehensive computational model. More specifically, the paper provides an operational semantics for electronic institutions, specifying the essential data structures, the state representation and the key operations necessary to implement them.

ECAI Conference 2012 Conference Paper

Efficient Norm Emergence through Experiential Dynamic Punishment

  • Samhar Mahmoud
  • Nathan Griffiths
  • Jeroen Keppens
  • Michael Luck

Peer punishment has been an effective means to ensure that norms are complied with in a population of self-interested agents. However, current approaches to establishing norms have only considered static punishments, which do not vary with the magnitude or frequency of norm violation. Such static punishments are difficult to apply because it is difficult to identify an appropriate fixed penalty: one that is strong enough to disincentivise norm violations, yet not so strong as to lead to significant deleterious effects on the system as a whole (such as those incurred by losing the benefits of a member of the population). This paper addresses this concern by developing an adaptive punishment technique that tailors penalty to norm violation. An experimental evaluation of the approach demonstrates its value compared to static punishment. In particular, the results show that our dynamic punishment technique is capable of achieving norm emergence, even when starting with an amount of punishment that is too low to achieve emergence in the traditional static approach.
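As a toy illustration of penalties that adapt instead of staying fixed, the sketch below scales punishment with an agent's violation history. The geometric update rule and constants are assumptions for illustration only; the paper's actual technique is not reproduced here.

```python
# Hypothetical sketch: a penalty that grows with an agent's violation
# history rather than remaining static. The geometric growth rule and
# default constant are illustrative assumptions, not the paper's model.

def adaptive_penalty(base: float, violations: int, growth: float = 1.5) -> float:
    """Scale a base penalty geometrically with the count of prior violations."""
    return base * growth ** violations
```

A first offence costs only `base`, while repeat offenders face steeper penalties, sidestepping the need to pick a single fixed value that is either too weak or too strong.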

KER Journal 2012 Journal Article

Norms, organizations, and semantics

  • Olivier Boissier
  • Marco Colombetti
  • Michael Luck
  • John-Jules Meyer
  • Axel Polleres

Abstract This paper integrates the responses to a set of questions from a distinguished set of panelists involved in a discussion at the Agreement Technologies workshop in Cyprus in December 2009. The panel was concerned with the relationship between the research areas of semantics, norms, and organizations, and the ways in which each may contribute to the development of the others in support of next generation agreement technologies.

AAMAS Conference 2012 Conference Paper

User-Centric Preference-Based Decision Making

  • Ingrid Nunes
  • Simon Miles
  • Michael Luck
  • Carlos de Lucena

The automation of user tasks by agents may involve decision making that must take into account user preferences. This paper introduces a decision making technique that reasons about preferences and priorities expressed in a high-level language in order to choose an option from the set of those available. Our technique includes principles from psychology, concerning the way in which humans make decisions. Our preference language is informed by a user study on preference expression, which is also used to evaluate our approach by comparing our results with those provided by a human expert. The evaluation indicates that our technique makes choices on behalf of the user that are as good as those made by the expert.

JAAMAS Journal 2011 Journal Article

Evolutionary testing of autonomous software agents

  • Cu D. Nguyen
  • Simon Miles
  • Michael Luck

Abstract A system built in terms of autonomous software agents may require even greater correctness assurance than one that is merely reacting to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder; by their nature, autonomous agents may react in different ways to the same inputs over time, because, for instance they have changeable goals and knowledge. For this reason, we argue that testing of autonomous agents requires a procedure that caters for a wide range of test case contexts, and that can search for the most demanding of these test cases, even when they are not apparent to the agents’ developers. In this paper, we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimisation to generate demanding test cases. We propose a methodology to derive objective (fitness) functions that drive evolutionary algorithms, and evaluate the overall approach with two simulated autonomous agents. The obtained results show that our approach is effective in finding good test cases automatically.

AAMAS Conference 2010 Conference Paper

A Model of Normative Power

  • Nir Oren
  • Michael Luck
  • Simon Miles

A power describes the ability of an agent to act in some way. While this notion of power is critical in the context of organisational dynamics, and has been studied by others in this light, it must be constrained so as to be useful in any practical application. In particular, we are concerned with how power may be used by agents to govern the imposition and management of norms, and how agents may dynamically assign norms to other agents within a multi-agent system. We approach the problem by defining a syntax and semantics for powers governing the creation, deletion, or modification of norms within a system, which we refer to as normative powers. We then extend this basic model to accommodate more general powers that can modify other powers within the system, and describe how agents playing certain roles are able to apply powers, changing the system's norms, and also the powers themselves. We examine how the powers found within a system may change as the status of norms change, and show how standard norm modification operations - such as the derogation, annulment and modification of norms - may be represented within our system.

AAMAS Conference 2010 Conference Paper

A Simulation Approach to Design Contracts that Govern Emergent Multi-Agent Systems

  • Maíra Gatti
  • Simon Miles
  • Nir Oren
  • Michael Luck
  • Carlos Lucena

Governing the behavior of autonomous agents in multi-agent systems to reach overall system benefit has long been an active area of research. One approach of recent prevalence is to provide agents with explicit specifications of what they should, should not or may do within the system, i.e. normative statements or norms. In a business setting, these norms exactly mirror the contractual agreements made between business organizations. As such, agent-based normative systems offer the potential for a business to model, understand the consequences of, and then refine contracts to improve the outcomes for that business. However, languages and tools for specifying norms do not by themselves provide understanding of the emergent behavior in a complex domain. In this paper, we combine a simulation technique designed for investigating and tuning emergent behavior in multi-agent systems with an approach to modeling norms of the complexity found in business contracts. We show, using an aerospace case study, that our approach can aid in the refinement of such contracts by exposing the consequences of contract variations.

AAMAS Conference 2010 Conference Paper

Changing Neighbours: Improving Tag-Based Cooperation

  • Nathan Griffiths
  • Michael Luck

In systems of autonomous self-interested agents, in which agents' neighbourhoods are defined by their connections to others, cooperation can arise through observation of the behaviour of neighbours to determine values of trust and reputation. While there are many techniques for encouraging cooperative behaviour within such systems, they often require a centralised authority or rely on reciprocity that is not always available. In response, this paper presents a decentralised mechanism to support cooperation without requiring reciprocity. The mechanism is based on tag-based cooperation, supplemented by assessing neighbourhood context and using simple rewiring to cope with cheaters. In particular, the paper makes two key contributions. First, it provides a technique for increasing resilience in the face of malicious behaviour by enabling individuals to rewire their connections to others and so modify their neighbourhoods. Second, it provides an empirical analysis of several strategies for rewiring, evaluating them through simulations.

AAMAS Conference 2010 Conference Paper

Graphically Explaining Norms

  • Madalina Croitoru
  • Nir Oren
  • Simon Miles
  • Michael Luck

While much work has focused on the creation of norm-aware agents, much less has been concerned with aiding a system's designers in understanding the effects of norms on a system. However, since norms are generally pre-determined by designers, providing such support can be critical in enabling norm refinement for more effective or efficient system regulation. In this paper, we address just this problem by providing explanations as to why some norm is applicable, violated, or in some other state. We make use of conceptual graph based semantics to provide an easily interpretable graphical representation of the norms within a system. Such a representation allows for visual explanation of the state of norms, showing for example why they may have been activated or violated. These explanations then enable easy understanding of the system operation without needing to follow the system's underlying logic.

AAMAS Conference 2009 Conference Paper

A Framework for Monitoring Agent-Based Normative Systems

  • Sanjay Modgil
  • Noura Faci
  • Felipe Meneguzzi
  • Nir Oren
  • Simon Miles
  • Michael Luck

The behaviours of autonomous agents may deviate from those deemed to be for the good of the societal systems of which they are a part. Norms have therefore been proposed as a means to regulate agent behaviours in open and dynamic systems, where these norms specify the obliged, permitted and prohibited behaviours of agents. Regulation can effectively be achieved through use of enforcement mechanisms that result in a net loss of utility for an agent in cases where the agent’s behaviour fails to comply with the norms. Recognition of compliance is thus crucial for achieving regulation. In this paper we propose a generic architecture for observation of agent behaviours, and recognition of these behaviours as constituting, or counting as, compliance or violation. The architecture deploys monitors that receive inputs from observers, and processes these inputs together with transition network representations of individual norms. In this way, monitors determine the fulfillment or violation status of norms. The paper also describes a proof of concept implementation and deployment of monitors in electronic contracting environments.

AAMAS Conference 2009 Conference Paper

Emergent Service Provisioning and Demand Estimation through Self-Organizing Agent Communities

  • Mariusz Jacyno
  • Seth Bullock
  • Michael Luck
  • TERRY R. PAYNE

A major challenge within open markets is the ability to satisfy service demand with an adequate supply of service providers, especially when such demand may be volatile due to changing requirements, or fluctuations in the availability of services. Ideally, this supply and demand should be balanced; however, when consumer demand changes over time, and providers independently choose which services they provide, a coordination problem known as ‘herding’ can arise bringing instability to the market. This behavior can emerge when consumers share similar preferences for the same providers, and thus compete for the same resources. Likewise, providers which share estimates of fluctuating demand may respond in unison, withdrawing some services to introduce others, and thus oscillate the available supply around some ideal equilibrium. One approach to avoid this unstable behavior is to limit the flow of information between agents, such that they possess an incomplete and subjective view of the local service availability. We propose a model of an adaptive service-offering mechanism, in which providers adapt their choice of services offered to consumers, based on perceived demand. By varying the volume of information shared by agents, we demonstrate that a co-adaptive equilibrium can be achieved, thus avoiding the herding problem. As the knowledge that agents possess is limited, they self-organise into community structures that support locally shared information. We demonstrate that such a model is capable of reducing instability in service demand and thus increase utility (based on successful service provision) by up to 59%, when compared to the use of globally available information.

AAMAS Conference 2009 Conference Paper

Evolutionary Testing of Autonomous Software Agents

  • Cu D. Nguyen
  • Anna Perini
  • Paolo Tonella
  • Simon Miles
  • Mark Harman
  • Michael Luck

A system built in terms of autonomous agents may require even greater correctness assurance than one which is merely reacting to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder; by their nature, autonomous agents may react in different ways to the same inputs over time, because, for instance they have changeable goals and knowledge. For this reason, we argue that testing of autonomous agents requires a procedure that caters for a wide range of test case contexts, and that can search for the most demanding of these test cases, even when they are not apparent to the agents’ developers. In this paper, we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimization to generate demanding test cases.

AAMAS Conference 2009 Conference Paper

Norm-Based Behaviour Modification in Bdi Agents

  • Felipe Meneguzzi
  • Michael Luck

While there has been much work on developing frameworks and models of norms and normative systems, consideration of the impact of norms on the practical reasoning of agents has attracted less attention. The problem is that traditional agent architectures and their associated languages provide no mechanism to adapt an agent at runtime to norms constraining their behaviour. This is important because if BDI-type agents are to operate in open environments, they need to adapt to changes in the norms that regulate such environments. In response, in this paper we provide a technique to extend BDI agent languages, by enabling them to enact behaviour modification at runtime in response to newly accepted norms. Our solution consists of creating new plans to comply with obligations and suppressing the execution of existing plans that violate prohibitions. We demonstrate the viability of our approach through an implementation of our solution in the AgentSpeak(L) language.

AAMAS Conference 2008 Conference Paper

Case Studies for Contract-based Systems

  • Michal Jakob
  • Michal Pěchouček
  • Simon Miles
  • Michael Luck

Of the ways in which agent behaviour can be regulated in a multiagent system, electronic contracting – based on explicit representation of different parties' responsibilities, and the agreement of all parties to them – has significant potential for modern industrial applications. Based on this assumption, the CONTRACT project aims to develop and apply electronic contracting and contract-based monitoring and verification techniques in real world applications. This paper presents results from the initial phase of the project, which focused on requirements solicitation and analysis. Specifically, we survey four use cases from diverse industrial applications, examine how they can benefit from an agent-based electronic contracting infrastructure and outline the technical requirements that would be placed on such an infrastructure. We present the designed CONTRACT architecture and describe how it may fulfil these requirements. In addition to motivating our work on the contract-based infrastructure, the paper aims to provide a much needed community resource in terms of the use cases themselves and to provide a clear commercial context for the development of work on contract-based systems.

AAMAS Conference 2008 Conference Paper

Electronic contracting in aircraft aftercare: A case study

  • Felipe Meneguzzi
  • Simon Miles
  • Michael Luck
  • Camden Holt
  • Malcolm Smith

Distributed systems comprised of autonomous self-interested entities require some sort of control mechanism to ensure the predictability of the interactions that drive them. This is certainly true in the aerospace domain, where manufacturers, suppliers and operators must coordinate their activities to maximise safety and profit, for example. To address this need, the notion of norms has been proposed which, when incorporated into formal electronic documents, allow for the specification and deployment of contract-driven systems. In this context, we describe the CONTRACT framework and architecture for exactly this purpose, and describe a concrete instantiation of this architecture as a prototype system applied to an aerospace aftercare scenario.

AAMAS Conference 2007 Conference Paper

Modelling the Provenance of Data in Autonomous Systems

  • Simon Miles
  • STEVE MUNROE
  • Michael Luck
  • Luc Moreau

Determining the provenance of data, i.e. the process that led to that data, is vital in many disciplines. For example, in science, the process that produced a given result must be demonstrably rigorous for the result to be deemed reliable. A provenance system supports applications in recording adequate documentation about process executions to answer queries regarding provenance, and provides functionality to perform those queries. Several provenance systems are being developed, but all focus on systems in which the components are reactive, for example Web Services that act on the basis of a request, job submission systems, etc. This limitation means that questions regarding the motives of autonomous actors, or agents, in such systems remain unanswerable in the general case. Such questions include: who was ultimately responsible for a given effect, what was their reason for initiating the process and does the effect of a process match what was intended to occur by those initiating the process? In this paper, we address this limitation by integrating two solutions: a generic, re-usable framework for representing the provenance of data in service-oriented architectures and a model for describing the goal-oriented delegation and engagement of agents in multi-agent systems. Using these solutions, we present algorithms to answer common questions regarding responsibility and success of a process and evaluate the approach with a simulated healthcare example.

IS Journal 2007 Journal Article

The Agents Are All Busy Doing Stuff!

  • Peter McBurney
  • Michael Luck

In answer to the question, "Where have all the agents gone?", this column asserts that agent technologies are pervasive, not missing.

KER Journal 2006 Journal Article

Crossing the agent technology chasm: Lessons, experiences and challenges in commercial applications of agents

  • STEVE MUNROE
  • Tim Miller
  • ROXANA A. BELECHEANU
  • Michal Pěchouček
  • Peter McBurney
  • Michael Luck

Agent software technologies are currently still in an early stage of market development, where, arguably, the majority of users adopting the technology are visionaries who have recognized the long-term potential of agent systems. Some current adopters also see short-term net commercial benefits from the technology, and more potential users will need to perceive such benefits if agent technologies are to become widely used. One way to assist potential adopters to assess the costs and benefits of agent technologies is through the sharing of actual deployment histories of these technologies. Working in collaboration with several companies and organizations in Europe and North America, we have studied deployed applications of agent technologies, and we present these case studies in detail in this paper. We also review the lessons learnt, and the key issues arising from the deployments, to guide decision-making in research, in development and in implementation of agent software technologies.

JAAMAS Journal 2006 Journal Article

TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

  • W. T. Luke Teacy
  • Jigar Patel
  • Michael Luck

Abstract In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS) which models an agent’s trust in an interaction partner. Specifically, trust is calculated using probability theory taking account of past interactions between agents, and when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
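The probability-based trust computation described in the abstract can be sketched minimally as follows. This assumes binary interaction outcomes and a Beta model with a uniform prior; the function names and the pooling of third-party reports by summing outcome counts are illustrative assumptions, not the TRAVOS definition (which, in particular, also handles inaccurate reports).

```python
# Minimal sketch of probability-based trust over binary interaction
# outcomes, in the spirit of the abstract above. The uniform Beta
# prior and count-summing aggregation are assumptions for
# illustration, not the TRAVOS model itself.

def trust(successes: int, failures: int) -> float:
    """Expected probability of a good interaction: mean of Beta(s+1, f+1)."""
    return (successes + 1) / (successes + failures + 2)

def combined_trust(own, reports):
    """Pool direct experience with third-party (successes, failures) reports."""
    s = own[0] + sum(r[0] for r in reports)
    f = own[1] + sum(r[1] for r in reports)
    return trust(s, f)
```

With no evidence, `trust(0, 0)` returns 0.5, and accumulating interactions moves the estimate toward the observed success rate, which is where reputation reports become useful for agents with little direct experience.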

KER Journal 2005 Journal Article

Agents in bioinformatics

  • Michael Luck
  • EMANUELA MERELLI

The scope of the Technical Forum Group (TFG) on Agents in Bioinformatics (BIOAGENTS) was to inspire collaboration between the agent and bioinformatics communities with the aim of creating an opportunity to propose a different (agent-based) approach to the development of computational frameworks both for data analysis in bioinformatics and for system modelling in computational biology. During the day, the participants examined the future of research on agents in bioinformatics primarily through 12 invited talks selected to cover the most relevant topics. From the discussions, it became clear that there are many perspectives to the field, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages for use by information agents, and to the use of Grid agents, each of which requires further exploration. The interactions between participants encouraged the development of applications that describe a way of creating agent-based simulation models of biological systems, starting from an hypothesis and inferring new knowledge (or relations) by mining and analysing the huge amount of public biological data. In this report we summarize and reflect on the presentations and discussions.

IJCAI Conference 2003 Conference Paper

On Identifying and Managing Relationships in Multi-Agent Systems

  • Ronald Ashri
  • Michael Luck
  • MARK D'INVERNO

Multi-agent systems result from interactions between individual agents. Through these interactions different kinds of relationships are formed, which can impact substantially on the overall system performance. However, the behaviour of agents cannot always be anticipated, especially when dealing with open and complex systems. Open agent systems must incorporate relationship management mechanisms to constrain agent actions and allow only desirable interactions. In consequence, in this paper we tackle two important issues. Firstly, in addressing management, we identify the range of different control mechanisms that are required and when they should be applied. Secondly, in addressing relationships, we present a model for identifying and characterising relationships in a manner that is application-neutral and amenable to automation.

KER Journal 2002 Journal Article

Practical and theoretical innovations in multi-agent systems research

  • MARK D'INVERNO
  • Michael Luck
  • UKMAS 2001 Contributors

1 Introduction UKMAS has now been running for six years, in 1996 and 1997 under the heading of FoMAS (Foundations of Multi-Agent Systems) both organised by Michael Luck at Warwick University and then subsequently in its current incarnation, UKMAS, first by Michael Fisher at Manchester Metropolitan University then by Chris Preist at Hewlett Packard Laboratories, Bristol and finally by Mark d'Inverno at St Catherine's College, Oxford in 2000. After the success of the workshop last year at St Catherine's in providing an excellent opportunity for academics and industrialists to come together to discuss current work and directions in the multi-agent systems field, it was decided by the steering committee to use St Catherine's once again as the venue for UKMAS 2001. The workshop was sponsored by the Engineering and Physical Sciences Research Council and by AgentLink, the European Commission's IST-funded Network of Excellence for Agent-Based Computing.

KER Journal 2001 Journal Article

Learning in multi-agent systems

  • Eduardo Alonso
  • Mark d'Inverno
  • Daniel Kudenko
  • Michael Luck
  • Jason Noble

In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and their interaction and coordination in such diverse areas as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter and specify an agent behaviour optimally in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.

KER Journal 2001 Journal Article

Multi-agent systems research into the 21st century

  • Mark d'Inverno
  • Michael Luck
  • UKMAS 2001 Contributors

There is little doubt that the strength and breadth of UK research into multi-agent systems continues to grow as we move into the new millennium. In the middle of an extremely cold December in 2000, the Third UK Workshop on Multi-Agent Systems (UKMAS 2001) was held at St Catherine's College, Oxford. This was the fifth such meeting in as many years, generously sponsored by EPSRC, FIPA (The Foundation for Intelligent Physical Agents) and Hewlett Packard.

KER Journal 2000 Journal Article

Can models of agents be transferred between different areas?

  • Ruth Aylett
  • Kerstin Dautenhahn
  • Jim Doran
  • Michael Luck
  • Scott Moss
  • Moshe Tennenholtz

One of the main reasons for the sustained activity and interest in the field of agent-based systems, apart from the obvious recognition of its value as a natural and intuitive way of understanding the world, is its reach into very many different and distinct fields of investigation. Indeed, the notions of agents and multi-agent systems are relevant to fields ranging from economics to robotics, in contributing to the foundations of the field, being influenced by ongoing research, and in providing many domains of application. While these various disciplines constitute a rich and diverse environment for agent research, the way in which they may have been linked by it is a much less considered issue. The purpose of this panel was to examine just this concern, in the relationships between different areas that have resulted from agent research. Informed by the experience of the participants in the areas of robotics, social simulation, economics, computer science and artificial intelligence, the discussion was lively and sometimes heated.

KER Journal 2000 Journal Article

Progress in multi-agent systems research

  • Omer Rana
  • Chris Preist
  • Michael Luck

Continuing the series of workshops begun in 1996 (Luck, 1997; Doran et al., 1997; d'Inverno et al., 1997; Fisher et al., 1997) and held in each of the two years since (Luck et al., 1998; Aylett et al., 1998; Binmore et al., 1998; Decker et al., 1999; Beer et al., 1999), the 1999 workshop of the UK Special Interest Group on Multi-Agent Systems (UKMAS'99) took place in Bristol in December. Chaired and organised by Chris Preist of Hewlett Packard Laboratories, with support from both HP and BT Laboratories, the workshop brought together a diverse range of participants, from the agent community in both the UK and abroad, to discuss and present work spanning all areas of agent research. Although dominated by computer scientists, also present at the meeting were electronic engineers, computational biologists, philosophers, sociologists, statisticians, game-theorists, economists and behavioural scientists, with both academia and industry well represented. Indeed, numbers attending these workshops continue to grow, reflecting the continued and rising interest in agent-based systems. The meeting truly demonstrated the wider view of what the term “agency” implied to research in other disciplines and the questions raised at the end of presentations were a pertinent reminder of the diversity of the audience.

KER Journal 1999 Journal Article

From definition to deployment: What next for agent-based systems?

  • Michael Luck

The rapid development of the field of agent-based systems offers a new and exciting paradigm for the development of sophisticated programs in dynamic and open environments, particularly in distributed domains such as web-based systems of various kinds, and electronic commerce. However, the speed of progress has been such that it has also brought with it a new set of problems. This paper reviews the current state of research into agent-based systems, considering reasons for the way the field has grown, and pointing at the way it might continue to progress. It pays particular attention to problems with defining the nature of agents, the technologies that have enabled the rapid progress to date, and ways in which work can be consolidated through the development of large-scale applications, and the integration with theoretical foundations.

KER Journal 1999 Journal Article

Negotiation in multi-agent systems

  • Martin Beer
  • Mark d'Inverno
  • Michael Luck
  • Nick Jennings
  • Chris Preist
  • Michael Schroeder

In systems composed of multiple autonomous agents, negotiation is a key form of interaction that enables groups of agents to arrive at a mutual agreement regarding some belief, goal or plan, for example. Particularly because the agents are autonomous and cannot be assumed to be benevolent, agents must influence others to convince them to act in certain ways, and negotiation is thus critical for managing such inter-agent dependencies. The process of negotiation may be of many different forms, such as auctions, protocols in the style of the contract net, and argumentation, but it is unclear just how sophisticated the agents or the protocols for interaction must be for successful negotiation in different contexts. All these issues were raised in the panel session on negotiation.

KER Journal 1998 Journal Article

Agent Systems and Applications

  • Ruth Aylett
  • Frances Brazier
  • Nick Jennings
  • Michael Luck
  • Hyacinth Nwana
  • Chris Preist

As the number of deployed multi-agent applications increases, further and better experience with the technology is gained, enabling a strong evaluation of the field from a more practical perspective. In particular, questions relating to how the theory of multi-agent systems impacts on practice, and how the practical development itself compares with other technologies, can be answered in the light of a heightened level of maturity. Given the tensions between theoreticians and practitioners in computing in general, let alone their spats in AI or multi-agent systems in particular, the discussion on agent systems and applications was both vigorous and enthusiastic.

KER Journal 1997 Journal Article

Formalisms for multi-agent systems

  • Mark d'Inverno
  • Michael Fisher
  • Alessio Lomuscio
  • Michael Luck
  • Maarten de Rijke
  • Mark Ryan
  • Michael Wooldridge

As computer scientists, our goals are motivated by the desire to improve computer systems in some way: making them easier to design and implement, more robust and less prone to error, easier to use, faster, cheaper, and so on. In the field of multi-agent systems, our goal is to build systems capable of flexible autonomous decision making, with societies of such systems cooperating with one another. There is a lot of formal theory in the area, but it is often not obvious what such theories should represent and what role the theory is intended to play. Theories of agents are often abstract and obtuse, and not related to concrete computational models.

KER Journal 1997 Journal Article

Foundations of multi-agent systems: issues and directions

  • Michael Luck

The last ten years have seen a marked increase of interest in agent-oriented technology, in several areas of computer science, including both software engineering and artificial intelligence. Agents are being used, and touted, for applications as diverse as personalised information management, electronic commerce, interface design, computer games, and management of complex commercial and industrial processes. Several deployed systems already use agent technology, and many more commercial and industrial applications are in development.