Arrow Research search

Author name cluster

Diego Aineto

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

12 papers
2 author rows

Possible papers (12)

IJCAI 2025 · Conference Paper

Handling Infinite Domain Parameters in Planning Through Best-First Search with Delayed Partial Expansions

  • Ángel Aso-Mollar
  • Diego Aineto
  • Enrico Scala
  • Eva Onaindia

In automated planning, control parameters extend standard action representations through the introduction of continuous numeric decision variables. Existing state-of-the-art approaches have primarily handled control parameters as embedded constraints alongside other temporal and numeric restrictions, and thus have implicitly treated them as additional constraints rather than as decision points in the search space. In this paper, we propose an efficient alternative that explicitly handles control parameters as true decision points within a systematic search scheme. We develop a best-first, heuristic search algorithm that operates over infinite decision spaces defined by control parameters and prove a notion of completeness in the limit under certain conditions. Our algorithm leverages the concept of delayed partial expansion, where a state is not fully expanded but instead incrementally expands a subset of its successors. Our results demonstrate that this novel search algorithm is a competitive alternative to existing approaches for solving planning problems involving control parameters.
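The delayed partial expansion idea can be sketched as follows. This is a hypothetical toy version in Python, not the authors' implementation: each node carries a lazy successor iterator, an expansion pops only the next successor, and the node is pushed back into the open list so its remaining successors can be generated later.

```python
import heapq
import itertools

def partial_expansion_search(start, successors, is_goal, h):
    """Best-first search with delayed partial expansions (toy sketch).

    successors(state) returns a (possibly infinite) iterator of
    (cost, next_state) pairs; each expansion generates only ONE
    successor and re-queues the parent if more may remain.
    """
    counter = itertools.count()  # unique tie-breaker for the heap
    open_list = [(h(start), next(counter), 0, start, iter(successors(start)))]
    while open_list:
        f, _, g, state, succ_iter = heapq.heappop(open_list)
        if is_goal(state):
            return g, state
        nxt = next(succ_iter, None)  # expand just one successor
        if nxt is not None:
            step_cost, child = nxt
            g2 = g + step_cost
            heapq.heappush(open_list,
                           (g2 + h(child), next(counter), g2, child,
                            iter(successors(child))))
            # re-queue the parent: the rest of its (possibly infinite)
            # successor set is expanded later, on demand
            heapq.heappush(open_list, (f, next(counter), g, state, succ_iter))
    return None
```

On a trivial chain of states this returns the goal with its accumulated cost; with an infinite successor iterator the same loop enumerates successors incrementally instead of blocking on a full expansion.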

ECAI 2025 · Conference Paper

Improving Resilient Planning Through Landmarks and Regressed State Formulas

  • Alberto Rovetta
  • Diego Aineto
  • Alfonso Emilio Gerevini
  • Enrico Scala
  • Ivan Serina

In real-world scenarios, the successful execution of an agent’s planned actions is not always guaranteed, as actions may fail in unpredictable ways that are not explicitly modeled. To address this challenge, the concept of Resilient Planning and the RESPLAN framework were introduced, focusing on the generation of k-resilient plans that enable an agent to reach its goals even in the presence of up to k execution failures. In this paper, we propose a new version of the RESPLAN planning algorithm based on two significant enhancements. The first incorporates landmarks into a pruning strategy, enabling the planner to avoid unnecessary explorations and yielding substantial performance gains, especially when no resilient plan exists. The second introduces a planning adaptation strategy exploiting regressed state formulas to support the search process during (re)planning, reducing the number of iterations required when a resilient plan does exist. We compare our methods against RESPLAN and other baselines, demonstrating substantial improvements across multiple domains.

KR 2024 · Conference Paper

Action Model Learning with Guarantees

  • Diego Aineto
  • Enrico Scala

This paper studies the problem of action model learning with full observability. Following the learning by search paradigm by Mitchell, we develop a theory for action model learning based on version spaces that interprets the task as search for hypotheses that are consistent with the learning samples. Our theoretical findings are instantiated in an online algorithm that maintains a compact representation of all solutions of the problem. Among this range of solutions, we bring attention to action models approximating the actual transition system from below (sound models) and from above (complete models). We show how to manipulate the output of our learning algorithm to build deterministic and non-deterministic formulations of the sound and complete models and prove that, given enough examples, both formulations converge into the very same true model. Our experiments reveal their usefulness over a range of planning domains.
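The sound (lower-bound) model from the version-space view can be illustrated with a toy propositional example. This is not the paper's algorithm, just a sketch of the bound: the most specific precondition consistent with the samples is the intersection of all observed pre-states, and the effects are the facts the action was observed to add or delete.

```python
def learn_sound_model(transitions):
    """Toy sound-model bound for one propositional action.

    transitions: list of (pre_state, post_state) pairs, each a set of
    facts observed before and after the action was applied.
    """
    # most specific precondition: facts true in EVERY observed pre-state
    precondition = set.intersection(*(set(pre) for pre, _ in transitions))
    adds, dels = set(), set()
    for pre, post in transitions:
        pre, post = set(pre), set(post)
        adds |= post - pre  # facts made true in some sample
        dels |= pre - post  # facts made false in some sample
    return precondition, adds, dels
```

The complete (upper-bound) model would instead take the most general precondition still consistent with the samples (in this toy setting, the empty set), so the two bounds bracket the true model from below and above.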

ECAI 2023 · Conference Paper

Action-Failure Resilient Planning

  • Diego Aineto
  • Alessandro Gaudenzi
  • Alfonso Emilio Gerevini
  • Alberto Rovetta
  • Enrico Scala
  • Ivan Serina

In the real world, the execution of the actions planned for an agent is never guaranteed to succeed, as they can fail in a number of unexpected ways that are not explicitly captured in the planning model. Based on these observations, we introduce the task of finding plans for classical planning that are resilient to action execution failures. We refer to this problem as Resilient Planning and to its solutions as K-resilient plans; such plans guarantee that an agent will always be able to reach its goals (possibly by replanning alternative sequences of actions) as long as no more than K failures occur along the way. We also present RESPLAN, a new algorithm for Resilient Planning, and we compare its performance to methods based on compiling Resilient Planning to Fully-Observable-Non-Deterministic (FOND) planning.
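The resilience condition can be illustrated with a toy recursive check on a labeled graph. This is an illustrative sketch, not RESPLAN: it assumes a failed action becomes permanently unusable, and it asks whether the agent can still replan to the goal after up to k such failures.

```python
def resilient(state, k, failed, goal, edges, visited=frozenset()):
    """Toy k-resilience check (illustrative sketch, not RESPLAN).

    edges: dict mapping state -> {action_name: next_state}.
    A state is k-resilient if some usable action both leads to a
    k-resilient state on success and, should it fail instead, leaves
    the agent (k-1)-resilient with that action ruled out.
    """
    if state == goal:
        return True
    if state in visited:  # cycle cut within the same (k, failed) context
        return False
    visited = visited | {state}
    for action, nxt in edges.get(state, {}).items():
        if action in failed:
            continue
        if resilient(nxt, k, failed, goal, edges, visited) and (
                k == 0 or resilient(state, k - 1,
                                    failed | frozenset([action]),
                                    goal, edges)):
            return True
    return False
```

For example, with two routes to the goal the agent survives one failure but not two, since the second failure can strand it on the only remaining route.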

ICAPS 2023 · Conference Paper

Falsification of Cyber-Physical Systems Using PDDL+ Planning

  • Diego Aineto
  • Enrico Scala
  • Eva Onaindia
  • Ivan Serina

This work explores the capabilities of current planning technologies to tackle the falsification of safety requirements for cyber-physical systems. Cyber-physical systems are systems where software and physical processes interact over time, and their requirements are commonly specified in temporal logic with time bounds. Roughly, falsification is the process of finding a trajectory of the cyber-physical system that violates the safety requirements, and it is a task typically tackled with black-box algorithms. We analyse the challenges posed by industry-driven falsification benchmarks taken from the ARCH-COMP competition, and propose a first attempt to deal with these problems through PDDL+ planning instead. Our experimental analysis on a selection of these problems provides empirical evidence on the feasibility and effectiveness of planning-based approaches, whilst also identifying the main areas of improvement.

JAIR 2022 · Journal Article

A Comprehensive Framework for Learning Declarative Action Models

  • Diego Aineto
  • Sergio Jiménez
  • Eva Onaindia

A declarative action model is a compact representation of the state transitions of dynamic systems that generalizes over world objects. The specification of declarative action models is often a complex hand-crafted task. In this paper we formulate declarative action models via state constraints, and present the learning of such models as a combinatorial search. The comprehensive framework presented here allows us to connect the learning of declarative action models to well-known problem solving tasks. In addition, our framework allows us to characterize the existing work in the literature according to four dimensions: (1) the target action models, in terms of the state transitions they define; (2) the available learning examples; (3) the functions used to guide the learning process, and to evaluate the quality of the learned action models; (4) the learning algorithm. Last, the paper lists relevant successful applications of the learning of declarative action models and discusses some open challenges with the aim of encouraging future research work.

IJCAI 2022 · Conference Paper

Explaining the Behaviour of Hybrid Systems with PDDL+ Planning

  • Diego Aineto
  • Eva Onaindia
  • Miquel Ramirez
  • Enrico Scala
  • Ivan Serina

The aim of this work is to explain the observed behaviour of a hybrid system (HS). The explanation problem is cast as finding a trajectory of the HS that matches some observations. By using the formalism of hybrid automata (HA), we characterize the explanations as the language of a network of HA that comprises one automaton for the HS and another one for the observations, thus restricting the behaviour of the HS exclusively to trajectories that explain the observations. We observe that this problem corresponds to a reachability problem in model-checking, but that state-of-the-art model checkers struggle to find concrete trajectories. To overcome this issue we provide a formal mapping from HA to PDDL+ and show how to use an off-the-shelf automated planner. An experimental analysis over domains with piece-wise constant, linear and nonlinear dynamics reveals that the proposed PDDL+ approach is much more efficient than directly solving the explanation problem with model-checking solvers.

KR 2021 · Conference Paper

Generalized Temporal Inference via Planning

  • Diego Aineto
  • Sergio Jiménez
  • Eva Onaindia

This paper introduces the Temporal Inference Problem (TIP), a general formulation for a family of inference problems that reason about the past, present or future state of some observed agent. A TIP builds on the models of an actor and of an observer. Observations of the actor are gathered at arbitrary times, and a TIP encodes hypotheses about unobserved segments of the actor's trajectory. Regarding the last observation as the present time, a TIP makes it possible to hypothesize about the past trajectory, future trajectory or current state of the actor. We use LTL as a language for expressing hypotheses and reduce a TIP to a planning problem which is solved with an off-the-shelf classical planner. The output of the TIP is the most likely hypothesis, i.e., the minimal-cost trajectory under the assumption that the actor is rational. Our proposal is evaluated on a wide range of TIP instances defined over different planning domains.

ICAPS 2020 · Conference Paper

Observation Decoding with Sensor Models: Recognition Tasks via Classical Planning

  • Diego Aineto
  • Sergio Jiménez Celorrio
  • Eva Onaindia

Observation decoding aims at discovering the underlying state trajectory of an acting agent from a sequence of observations. This task is at the core of various recognition activities that exploit planning as a resolution method, but there is a general lack of formal approaches that reason about the partial information received by the observer or leverage the distribution of the observations emitted by the sensors. In this paper, we formalize the observation decoding task exploiting a probabilistic sensor model to build more accurate hypotheses about the behaviour of the acting agent. Our proposal extends the expressiveness of former recognition approaches by accepting observation sequences where one observation of the sequence can represent the reading of more than one variable, thus enabling observations over actions and partially observable states simultaneously. We formulate the probability distribution of the observations perceived when the agent performs an action or visits a state as a classical cost-planning task that is solved with an optimal planner. The experiments show that exploiting a sensor model increases the accuracy of predicting the agent behaviour in four different contexts.
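The core decoding idea, finding the state trajectory that maximizes the probability of the observed readings, can be illustrated with a standard Viterbi-style dynamic program. This is an analogy to the cost formulation (minimizing summed negative log probabilities, as a planner would minimize plan cost), not the paper's classical-planning compilation.

```python
import math

def decode(observations, states, trans, emit):
    """Most-likely state trajectory under a probabilistic sensor model
    (Viterbi-style illustration, NOT the paper's planning compilation).

    trans[s]   -> list of successor states of s
    emit[s][o] -> probability that state s is perceived as observation o
    """
    # trajectory cost = sum of -log emission probabilities
    best = {}
    for s in states:
        p = emit[s].get(observations[0], 0.0)
        if p > 0:
            best[s] = (-math.log(p), [s])
    for o in observations[1:]:
        nxt = {}
        for s, (cost, path) in best.items():
            for s2 in trans.get(s, []):
                p = emit[s2].get(o, 0.0)
                if p <= 0:
                    continue
                c2 = cost - math.log(p)
                if s2 not in nxt or c2 < nxt[s2][0]:
                    nxt[s2] = (c2, path + [s2])
        best = nxt
    return min(best.values())[1] if best else None
```

With a noisy sensor, the cheapest (most probable) trajectory can pass through states that were never observed directly, which is exactly what a sensor-aware decoder buys over naive matching.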

AIJ 2019 · Journal Article

Learning action models with minimal observability

  • Diego Aineto
  • Sergio Jiménez Celorrio
  • Eva Onaindia

This paper presents FAMA, a novel approach for learning STRIPS action models from observations of plan executions that compiles the learning task into a classical planning task. Unlike all existing learning systems, FAMA is able to learn when the actions of the plan executions are partially or totally unobservable and information on intermediate states is partially provided. This flexibility makes FAMA an ideal learning approach in domains where only sensor data are accessible. Additionally, we leverage the compilation scheme and extend it to come up with an evaluation method that allows us to assess the quality of a learned model syntactically, that is, with respect to the actual model; and, semantically, that is, with respect to a set of observations of plan executions. We also show that the extended compilation scheme can be used to lay the foundations of a framework for action model comparison. FAMA is exhaustively evaluated over a wide range of IPC domains and its performance is compared to ARMS, a state-of-the-art benchmark in action model learning.

ICAPS 2019 · Conference Paper

Model Recognition as Planning

  • Diego Aineto
  • Sergio Jiménez Celorrio
  • Eva Onaindia
  • Miquel Ramírez

Given a partially observed plan execution, and a set of possible planning models (models that share the same state variables but different action schemata), model recognition is the task of identifying the model that explains the observation. The paper formalizes this task and introduces a novel method that estimates the probability of a STRIPS model to produce an observation of a plan execution. This method builds on top of off-the-shelf classical planning algorithms and it is robust to missing actions and intermediate states in the observation. The effectiveness of the method is tested in three experiments, each encoding a set of different STRIPS models and all using empty-action observations: (1) a classical string classification task; (2) identification of the model that encodes a failure present in an observation; and (3) recognition of a robot navigation policy.

ICAPS 2018 · Conference Paper

Learning STRIPS Action Models with Classical Planning

  • Diego Aineto
  • Sergio Jiménez Celorrio
  • Eva Onaindia

This paper presents a novel approach for learning STRIPS action models from examples that compiles this inductive learning task into a classical planning task. Interestingly, the compilation approach is flexible to different amounts of available input knowledge; the learning examples can range from a set of plans (with their corresponding initial and final states) to just a pair of initial and final states (no intermediate action or state is given). Moreover, the compilation accepts partially specified action models and it can be used to validate whether the observation of a plan execution follows a given STRIPS action model, even if this model is not fully specified.