Arrow Research search

Author name cluster

Jim Duggan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers

Possible papers

TAAS Journal 2023 Journal Article

A Genetic Programming-based Framework for Semi-automated Multi-agent Systems Engineering

  • Nicola Mc Donnell
  • Jim Duggan
  • Enda Howley

With the rise of new technologies, such as Edge computing, Internet of Things, Smart Cities, and Smart Grids, there is a growing need for multi-agent systems (MAS) approaches. Designing multi-agent systems is challenging, and doing this in an automated way is even more so. To address this, we propose a new framework, Evolved Gossip Contracts (EGC). It builds on Gossip Contracts (GC), a decentralised cooperation protocol that is used as the communication mechanism to facilitate self-organisation in a cooperative MAS. GC has several methods that are implemented uniquely, depending on the goal the MAS aims to achieve. The EGC framework uses evolutionary computing to search for the best implementation of these methods. To evaluate EGC, it was used to solve a classical NP-hard optimisation problem, the Bin Packing Problem (BPP). The experimental results show that EGC successfully discovered a decentralised strategy to solve the BPP, which is better than two classical heuristics on test cases similar to those on which it was trained; the improvement is statistically significant. EGC is the first framework that leverages evolutionary computing to semi-automate the discovery of a communication protocol for a MAS that has been shown to be effective at solving an NP-hard problem.
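The abstract does not name the two classical heuristics EGC is compared against, but first-fit decreasing is a standard baseline of that kind for the Bin Packing Problem. A minimal sketch, for illustration only (not the paper's actual baselines):

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing heuristic for bin packing: sort items
    largest-first, place each into the first bin with enough remaining
    capacity, and open a new bin when none fits."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins
```

For example, `first_fit_decreasing([4, 8, 1, 4, 2, 1], 10)` packs the six items into two bins of capacity 10.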

KER Journal 2018 Journal Article

Reward shaping for knowledge-based multi-objective multi-agent reinforcement learning

  • Patrick Mannion
  • Sam Devlin
  • Jim Duggan
  • Enda Howley

The majority of multi-agent reinforcement learning (MARL) implementations aim to optimize systems with respect to a single objective, despite the fact that many real-world problems are inherently multi-objective in nature. Research into multi-objective MARL is still in its infancy, and few studies to date have dealt with the issue of credit assignment. Reward shaping has been proposed as a means to address the credit assignment problem in single-objective MARL; however, it has been shown to alter the intended goals of a domain if misused, leading to unintended behaviour. Two popular shaping methods are potential-based reward shaping and difference rewards, and both have been repeatedly shown to improve learning speed and the quality of joint policies learned by agents in single-objective MARL domains. This work discusses the theoretical implications of applying these shaping approaches to cooperative multi-objective MARL problems, and evaluates their efficacy using two benchmark domains. Our results constitute the first empirical evidence that agents using these shaping methodologies can sample true Pareto optimal solutions in cooperative multi-objective stochastic games.
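Potential-based reward shaping, one of the two methods named above, augments the environment reward with the difference of a potential function over states. A minimal sketch (the function and potential names here are illustrative, not the paper's implementation):

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    """Potential-based reward shaping: add F(s, s') = gamma * Phi(s') - Phi(s)
    to the environment reward r. This additive form encodes heuristic
    knowledge via the potential Phi while leaving the optimal policies
    of the underlying task unchanged."""
    return r + gamma * potential(s_next) - potential(s)
```

A typical potential is a distance-to-goal heuristic, e.g. `potential = lambda s: -abs(s - goal)` on a 1-D chain, so steps toward the goal receive a positive shaping bonus.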

AAMAS Conference 2017 Conference Paper

A Theoretical and Empirical Analysis of Reward Transformations in Multi-Objective Stochastic Games

  • Patrick Mannion
  • Jim Duggan
  • Enda Howley

Reward shaping has been proposed as a means to address the credit assignment problem in Multi-Agent Systems (MAS). Two popular shaping methods are Potential-Based Reward Shaping and difference rewards, and both have been shown to improve learning speed and the quality of joint policies learned by agents in single-objective MAS. In this work we discuss the theoretical implications of applying these approaches to multi-objective MAS, and evaluate their efficacy using a new multi-objective benchmark domain where the true set of Pareto optimal system utilities is known.

KER Journal 2017 Journal Article

Multi-agent credit assignment in stochastic resource management games

  • Patrick Mannion
  • Sam Devlin
  • Jim Duggan
  • Enda Howley

Multi-agent systems (MASs) are a form of distributed intelligence, where multiple autonomous agents act in a common environment. Numerous complex, real-world systems have been successfully optimized using multi-agent reinforcement learning (MARL) in conjunction with the MAS framework. In MARL, agents learn by maximizing a scalar reward signal from the environment, and thus the design of the reward function directly affects the policies learned. In this work, we address the issue of appropriate multi-agent credit assignment in stochastic resource management games. We propose two new stochastic games to serve as testbeds for MARL research into resource management problems: the tragic commons domain and the shepherd problem domain. Our empirical work evaluates the performance of two commonly used reward shaping techniques: potential-based reward shaping and difference rewards. Experimental results demonstrate that systems using appropriate reward shaping techniques for multi-agent credit assignment can achieve near-optimal performance in stochastic resource management games, outperforming systems learning using unshaped local or global evaluations. We also present the first empirical investigations into the effect of expressing the same heuristic knowledge in state- or action-based formats, thereby developing insights into the design of multi-agent potential functions that will inform future work.
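The second shaping technique evaluated above, difference rewards, scores each agent by the counterfactual change in global utility when its action is removed or replaced by a default. A minimal sketch with an illustrative toy utility (the names and the default-action convention are assumptions, not the paper's domains):

```python
def difference_reward(global_utility, joint_action, i, default=None):
    """Difference reward D_i = G(z) - G(z_-i): the drop in global utility
    when agent i's action is replaced by a default. A high D_i means
    agent i personally contributed to system performance, giving a
    sharper learning signal than the raw global reward."""
    counterfactual = list(joint_action)
    counterfactual[i] = default
    return global_utility(joint_action) - global_utility(counterfactual)
```

With a toy utility that counts distinct resources covered, an agent duplicating another's choice earns 0, while an agent covering a unique resource earns 1, which is exactly the credit-assignment sharpening described above.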

AAMAS Conference 2016 Conference Paper

Multi-Objective Dynamic Dispatch Optimisation Using Multi-Agent Reinforcement Learning (Extended Abstract)

  • Patrick Mannion
  • Karl Mason
  • Sam Devlin
  • Jim Duggan
  • Enda Howley

In this paper, we examine the application of Multi-Agent Reinforcement Learning (MARL) to a Dynamic Economic Emissions Dispatch problem. This is a multi-objective problem domain, where the conflicting objectives of fuel cost and emissions must be minimised. We evaluate the performance of several different MARL credit assignment structures in this domain, and our experimental results show that MARL can produce comparable solutions to those computed by Genetic Algorithms and Particle Swarm Optimisation.
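Dynamic Economic Emissions Dispatch is typically formulated with quadratic per-generator fuel-cost and emissions curves. A minimal sketch of how the two conflicting objectives are evaluated for a joint dispatch (the coefficient layout is a common textbook form, not necessarily the benchmark's exact parameters):

```python
def dispatch_objectives(powers, cost_coefs, emis_coefs):
    """Evaluate the two DEED objectives for a joint dispatch: generator i
    producing powers[i] MW incurs quadratic fuel cost a + b*P + c*P^2,
    with an analogous quadratic curve for emissions. Both totals are to
    be minimised, and they conflict."""
    cost = sum(a + b * p + c * p * p for p, (a, b, c) in zip(powers, cost_coefs))
    emis = sum(a + b * p + c * p * p for p, (a, b, c) in zip(powers, emis_coefs))
    return cost, emis
```

A learner (or a GA/PSO baseline, as compared in the paper) then trades these totals off, e.g. via a weighted sum or Pareto-based selection.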

EWRL Workshop 2015 Workshop Paper

Parallel Reinforcement Learning with State Action Space Partitioning

  • Patrick Mannion
  • Jim Duggan
  • Enda Howley

Parallel Reinforcement Learning (PRL) is an emerging paradigm within the Reinforcement Learning (RL) literature, where multiple agents share their experiences while learning in parallel on separate instances of a problem. Here we propose a novel variant of PRL with State Action Space Partitioning (SASP). PRL agents are each assigned to a specific region of the state action space of a problem, with the goal of increasing exploration and improving learning speed. We evaluate our proposed approach on a realistic traffic signal control problem, and show experimentally that it offers significant performance improvements over a PRL algorithm without SASP.
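The core of SASP is a deterministic map from each (state, action) pair to the learner responsible for that region. One simple illustrative partitioning scheme is hashing; the paper's actual partitioning is problem-specific, so this is only a sketch of the idea:

```python
def assign_partition(state, action, n_agents):
    """Map a (state, action) pair to one of n_agents regions so that
    each parallel learner concentrates its exploration on its own
    slice of the state action space. Hashing the pair gives a cheap,
    deterministic, roughly balanced assignment for hashable states."""
    return hash((state, action)) % n_agents
```

Experience gathered anywhere can then be routed to the owning learner, which is what lets partitioned agents cover the space faster than unpartitioned PRL.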

AAMAS Conference 2011 Conference Paper

Tag-Based Cooperation in N-Player Dilemmas

  • Enda Howley
  • Jim Duggan

This paper studies the emergence of cooperation in the N-Player Prisoner's Dilemma (NPD) using a tag-mediated interaction model. Tags have been widely used to bias agent pairwise interactions, which facilitates the emergence of cooperation. This paper identifies some of the key parameters that influence the emergence of cooperation in an evolutionary setting, with the aim of demonstrating the most vital factors that are commonly ignored in many existing NPD studies.
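One common linear public-goods formulation of the NPD payoff, shown here purely to illustrate the dilemma structure studied above (the paper's exact payoff parameters are not given in this listing):

```python
def npd_payoff(cooperate, n_cooperators, n_players, b=5.0, c=3.0):
    """Linear N-Player Prisoner's Dilemma payoff: every player receives
    a share b * k / N of the public good created by the k cooperators,
    and cooperators additionally pay the contribution cost c. For
    b / N < c, defection strictly dominates for each individual, yet
    all-cooperate outperforms all-defect for the group."""
    share = b * n_cooperators / n_players
    return share - c if cooperate else share
```

This tension between individual and collective incentives is exactly what tag-mediated interaction models aim to resolve by biasing who plays with whom.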