Arrow Research · Search

Author name cluster

Odinaldo Rodrigues

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

16 papers
2 author rows

Possible papers (16)

JAIR · 2025 · Journal Article

Forgetting in Abstract Argumentation: Limits and Possibilities

  • Ringo Baumann
  • Matti Berthold
  • Dov Gabbay
  • Odinaldo Rodrigues

The topic of forgetting, which loosely speaking means losing, removing, or even hiding some variables, propositions, or formulas, has been extensively studied in the field of knowledge representation and reasoning for many major formalisms. In this article, we convey this topic to the highly active field of abstract argumentation. We provide an in-depth analysis of desirable syntactical and/or semantical properties of possible forgetting operators. In doing so, we include well-known logic programming conditions, such as strong persistence or strong invariance. Further, we argue that although abstract argumentation and logic programming are closely related, it is not possible to reduce forgetting in abstract argumentation to forgetting in logic programming in a straightforward manner. The analysis of desiderata, adapted to the specifics of abstract argumentation, includes implications among them, individual and collective satisfiability, and identifying inherent limits for a set of prominent semantics. Finally, we conduct a case study on stable semantics incorporating concrete forgetting operators.

ECAI · 2025 · Conference Paper

Individual Consistency eXplorer (ICX): An Interactive Dashboard for the Exploration of Individual Fairness

  • Madeleine Waller
  • Odinaldo Rodrigues
  • Oana Cocarascu

We present ICX (Individual Consistency eXplorer), an interactive dashboard designed to support stakeholders in exploring individual fairness notions within algorithmic decision-making systems. ICX focuses on a set of metrics based on the consistency score, a key measure of individual fairness, by allowing the visualisation of how the classification of an individual compares with that of similar individuals. Stakeholders can define and fine-tune the notion of similarity according to domain-specific criteria, and examine individual-level views that highlight comparable individuals and their classification outcomes. ICX empowers users to interrogate, analyse and interpret fairness at the individual level, making algorithmic decision-making more transparent and accountable.

NeurIPS · 2025 · Conference Paper

Quantifying Generalisation in Imitation Learning

  • Nathan Gavenski
  • Odinaldo Rodrigues

Imitation learning benchmarks often lack sufficient variation between training and evaluation, limiting meaningful generalisation assessment. We introduce Labyrinth, a benchmarking environment designed to test generalisation with precise control over structure, start and goal positions, and task complexity. It enables verifiably distinct training, evaluation, and test settings. Labyrinth provides a discrete, fully observable state space and known optimal actions, supporting interpretability and fine-grained evaluation. Its flexible setup allows targeted testing of generalisation factors and includes variants like partial observability, key-and-door tasks, and ice-floor hazards. By enabling controlled, reproducible experiments, Labyrinth advances the evaluation of generalisation in imitation learning and provides a valuable tool for developing more robust agents.

JAIR · 2024 · Journal Article

Bias Mitigation Methods: Applicability, Legality, and Recommendations for Development

  • Madeleine Waller
  • Odinaldo Rodrigues
  • Michelle Seng Ah Lee
  • Oana Cocarascu

As algorithmic decision-making systems (ADMS) are increasingly deployed across various sectors, the importance of research on fairness in Artificial Intelligence (AI) continues to grow. In this paper we highlight a number of significant practical limitations and regulatory compliance issues associated with the application of existing bias mitigation methods to ADMS. We present an example of an algorithmic system used in recruitment to illustrate these limitations. Our analysis of existing methods indicates a pressing need for a change in the approach to the development of new methods. In order to address the limitations, we provide recommendations for key factors to consider in the development of new bias mitigation methods that aim to be effective in real-world scenarios and comply with legal requirements in the European Union, United Kingdom and United States, such as non-discrimination, data protection and sector-specific regulations. Further, we suggest a checklist relating to these recommendations that should be included with the development of new bias mitigation methods.

AAMAS · 2024 · Conference Paper

Combining Theory of Mind and Abductive Reasoning in Agent-Oriented Programming

  • Nieves Montes
  • Michael Luck
  • Nardine Osman
  • Odinaldo Rodrigues
  • Carles Sierra

In this paper we present TomAbd, a novel agent model extending the BDI architecture with Theory of Mind capabilities, i.e., the capacity to adopt and reason from the perspective of others. By combining the Theory of Mind of TomAbd agents with abductive reasoning, agents can infer explanations for the behaviour of others, which they can incorporate into their own decision-making. We have implemented the TomAbd agent model and successfully tested its performance in the cooperative board game Hanabi.

ECAI · 2024 · Conference Paper

Explorative Imitation Learning: A Path Signature Approach for Continuous Environments

  • Nathan Gavenski
  • Juarez Monteiro
  • Felipe Meneguzzi
  • Michael Luck
  • Odinaldo Rodrigues

Some imitation learning methods combine behavioural cloning with self-supervision to infer actions from state pairs. However, most rely on a large number of expert trajectories to increase generalisation and on human intervention to capture key aspects of the problem, such as domain constraints. In this paper, we propose Continuous Imitation Learning from Observation (CILO), a new method augmenting imitation learning with two important features: (i) exploration, allowing for more diverse state transitions, requiring fewer expert trajectories and resulting in fewer training iterations; and (ii) path signatures, allowing for automatic encoding of constraints through the creation of non-parametric representations of agent and expert trajectories. We compared CILO with a baseline and two leading imitation learning methods in five environments. It had the best overall performance of all methods in all environments, outperforming the expert in two of them.

AAAI · 2024 · Conference Paper

Identifying Reasons for Bias: An Argumentation-Based Approach

  • Madeleine Waller
  • Odinaldo Rodrigues
  • Oana Cocarascu

As algorithmic decision-making systems become more prevalent in society, ensuring the fairness of these systems is becoming increasingly important. Whilst there has been substantial research in building fair algorithmic decision-making systems, the majority of these methods require access to the training data, including personal characteristics, and are not transparent regarding which individuals are classified unfairly. In this paper, we propose a novel model-agnostic argumentation-based method to determine why an individual is classified differently in comparison to similar individuals. Our method uses a quantitative argumentation framework to represent attribute-value pairs of an individual and of those similar to them, and uses a well-known semantics to identify the attribute-value pairs in the individual contributing most to their different classification. We evaluate our method on two datasets commonly used in the fairness literature and illustrate its effectiveness in the identification of bias.

AAMAS · 2024 · Conference Paper

Imitation Learning Datasets: A Toolkit For Creating Datasets, Training Agents and Benchmarking

  • Nathan Gavenski
  • Michael Luck
  • Odinaldo Rodrigues

The imitation learning field requires expert data to train agents in a task. Most often, this learning approach suffers from the absence of available data, which results in techniques being tested on their own datasets. Creating datasets is a cumbersome process requiring researchers to train expert agents from scratch, record their interactions, and test each benchmark method with newly created data. Moreover, creating new datasets for each new technique results in a lack of consistency in the evaluation process, since each dataset can drastically vary in state and action distribution. In response, this work aims to address these issues by creating Imitation Learning Datasets, a toolkit that allows for: (i) curated expert policies with multithreaded support for faster dataset creation; (ii) readily available datasets and techniques with precise measurements; and (iii) sharing implementations of common imitation learning techniques. Demonstration link: https://nathangavenski.github.io/#/il-datasets-video

NeSy · 2022 · Conference Paper

From Subsymbolic to Symbolic: A Blueprint for Investigation

  • Joseph Pober
  • Michael Luck
  • Odinaldo Rodrigues

In this paper, we sketch a framework for integration between subsymbolic and symbolic representations, consisting of a series of layers and mappings between elements across the layers. Each layer corresponds to a particular level of abstraction about phenomena in the environment being observed in the layers below. Through an iterative process, the differences between the elements in successive iterations within a given layer are captured as transformations between the elements and used for identification and recognition of objects as well as prediction and verification of the environment in future iterations. A bridge between the subsymbolic and symbolic levels can be built by successively adding layers at ever more sophisticated levels of abstraction. This approach aims to benefit from subsymbolic learning, while harnessing the abstraction and reasoning powers of classical symbolic AI techniques.

AAAI · 2020 · Conference Paper

Forgetting an Argument

  • Ringo Baumann
  • Dov Gabbay
  • Odinaldo Rodrigues

The notion of forgetting, as considered in the famous paper by Lin and Reiter in 1994, has been extensively studied in classical logic and, more recently, in non-monotonic formalisms like logic programming. In this paper, we convey the idea of forgetting to another major AI formalism, namely Dung-style argumentation frameworks. Our approach is axiomatic-driven and not limited to any specific semantics: we propose semantical and syntactical desiderata encoding different criteria for what forgetting an argument might mean; analyze how these criteria relate to each other; and check whether the criteria can be satisfied in general. The analysis is done for a number of widely used argumentation semantics. Our investigation shows that almost all desiderata are individually satisfiable. However, combinations of semantical and/or syntactical conditions reveal a much more interesting landscape. For instance, we found that the ad hoc approach to forgetting an argument, i.e., by the syntactical removal of the argument and all of its associated attacks, is too restrictive and only compatible with the two weakest semantical desiderata. Amongst the several interesting combinations identified, we showed that one satisfies a notion of minimal change and presented an algorithm that, given an AF F and argument x, constructs a suitable AF G satisfying the conditions in the combination.

AAMAS · 2016 · Conference Paper

Estimating Second-Order Arguments in Dialogical Settings (Extended Abstract)

  • Seyed Ali Hosseini
  • Sanjay Modgil
  • Odinaldo Rodrigues

This paper proposes mechanisms for agents to model other agents’ arguments, so that modelling agents can anticipate the likelihood that their interlocutors can construct arguments in dialogues. In contrast with existing work on “opponent modelling”, which treats arguments as abstract entities, the likelihood that an agent can construct an argument is derived from the likelihoods that it possesses the beliefs required to construct the argument. We therefore also address how a modeller can quantify the certainty that its interlocutor possesses beliefs, based on previous dialogues and on the membership of interlocutors in communities.

FLAP · 2016 · Journal Article

Introducing Bayesian Argumentation Networks.

  • Dov M. Gabbay
  • Odinaldo Rodrigues

We give a faithful interpretation of Bayesian networks into a version of numerical argumentation networks based on Łukasiewicz infinite-valued logic with product conjunction. The advantages of such a translation, beyond the theoretical aspects of it, are hopefully threefold: 1) importing updating algorithms into argumentation networks; 2) importing the handling of loops into cyclic Bayesian networks; and 3) importing logical proof theory into Bayesian networks.

AAMAS · 2016 · Conference Paper

Prioritised Default Logic as Rational Argumentation

  • Anthony P. Young
  • Sanjay Modgil
  • Odinaldo Rodrigues

We endow Brewka’s prioritised default logic (PDL) with argumentation semantics using the ASPIC+ framework for structured argumentation. We prove that the conclusions of the justified arguments correspond to the prioritised default extensions in a normatively rational manner. Argumentation semantics for PDL will allow for the application of argument game proof theories to the process of inference in PDL, making the reasons for accepting a conclusion transparent and the inference process more intuitive. This also opens up the possibility for argumentation-based distributed reasoning and communication amongst agents with PDL representations of mental attitudes.

TARK · 1996 · Conference Paper

Counterfactuals and Updates as Inverse Modalities

  • Mark Ryan 0001
  • Pierre-Yves Schobbens
  • Odinaldo Rodrigues

We point out a simple but hitherto ignored link between the theory of updates and counterfactuals and classical modal logic: update is a classical existential modality, counterfactual is a classical universal modality, and the link between the two (called the Ramsey rule) is simply the link between two inverse accessibility relations of a classical Kripke model.