Arrow Research search

Author name cluster

Radu Grosu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

26 papers
2 author rows

Possible papers (26)

NeurIPS Conference 2025 Conference Paper

Parallelization of Non-linear State-Space Models: Scaling Up Liquid-Resistance Liquid-Capacitance Networks for Efficient Sequence Modeling

  • Mónika Farsang
  • Radu Grosu

We present LrcSSM, a $\textit{non-linear}$ recurrent model that processes long sequences as fast as today's linear state-space layers. By forcing its Jacobian matrix to be diagonal, the full sequence can be solved in parallel, giving $\mathcal{O}(TD)$ computational work and memory and only $\mathcal{O}(\log T)$ sequential depth, for input-sequence length $T$ and a state dimension $D$. Moreover, LrcSSM offers a formal gradient-stability guarantee that other input-varying systems such as Liquid-S4 and Mamba do not provide. Importantly, the diagonal Jacobian structure of our model results in no performance loss compared to the original model with dense Jacobian, and the approach can be generalized to other non-linear recurrent models, demonstrating broader applicability. On a suite of long-range forecasting tasks, we demonstrate that LrcSSM outperforms Transformers, LRU, S5, and Mamba.
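
The core trick, reducing the recurrence to independent scalar affine recurrences that a parallel prefix scan solves in $\mathcal{O}(\log T)$ sequential depth, can be sketched in plain Python (an illustration under simplified assumptions, not the paper's implementation):

```python
# Sketch: with a diagonal Jacobian, each state channel follows an affine
# recurrence h_t = a_t * h_{t-1} + b_t. Composition of affine maps is
# associative, so a parallel prefix scan can evaluate all T states in
# O(log T) dependent rounds.

def combine(left, right):
    """Compose two affine maps: applying `left` first, then `right`."""
    a1, b1 = left
    a2, b2 = right
    return (a2 * a1, a2 * b1 + b2)

def inclusive_scan(elems, op):
    """Hillis-Steele inclusive scan: O(log T) sequential rounds."""
    elems = list(elems)
    step = 1
    while step < len(elems):
        elems = [op(elems[i - step], elems[i]) if i >= step else elems[i]
                 for i in range(len(elems))]
        step *= 2
    return elems

def scan_recurrence(a, b, h0=0.0):
    """All states h_1..h_T of h_t = a_t * h_{t-1} + b_t via the scan."""
    prefixes = inclusive_scan(list(zip(a, b)), combine)
    return [A * h0 + B for A, B in prefixes]
```

A sequential evaluation of the same recurrence produces an identical trajectory; the scan only reorganizes the work so that the T steps need logarithmically many dependent rounds.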

AAAI Conference 2025 Conference Paper

Real-Time Recurrent Reinforcement Learning

  • Julian Lemmel
  • Radu Grosu

We introduce a biologically plausible RL framework for solving tasks in partially observable Markov decision processes (POMDPs). The proposed algorithm combines three integral parts: (1) A Meta-RL architecture, resembling the mammalian basal ganglia; (2) A biologically plausible reinforcement learning algorithm, exploiting temporal difference learning and eligibility traces to train the policy and the value-function; (3) An online automatic differentiation algorithm for computing the gradients with respect to parameters of a shared recurrent network backbone. Our experimental results show that the method is capable of solving a diverse set of partially observable reinforcement learning tasks. The algorithm we call real-time recurrent reinforcement learning (RTRRL) serves as a model of learning in biological neural networks, mimicking reward pathways in the basal ganglia.
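
The temporal-difference-with-eligibility-traces ingredient can be illustrated on a toy problem (a generic tabular TD(lambda) sketch, not the paper's RTRRL algorithm; the two-state chain and all constants are invented for the example):

```python
# Tabular TD(lambda) with accumulating eligibility traces on a deterministic
# chain s0 -> s1 -> terminal, reward 1 on the final transition, gamma = 1.
# Both states have true value 1.

def td_lambda(episodes=200, alpha=0.1, gamma=1.0, lam=0.9):
    V = {0: 0.0, 1: 0.0}
    transitions = [(0, 1, 0.0), (1, None, 1.0)]  # (state, next_state, reward)
    for _ in range(episodes):
        e = {0: 0.0, 1: 0.0}  # eligibility traces, reset each episode
        for s, s_next, r in transitions:
            v_next = V[s_next] if s_next is not None else 0.0
            delta = r + gamma * v_next - V[s]  # TD error
            e[s] += 1.0                        # accumulate trace for s
            for st in V:                       # credit all recently visited states
                V[st] += alpha * delta * e[st]
                e[st] *= gamma * lam           # decay traces
    return V
```

The traces let the reward on the final transition update the value of the earlier state within the same episode, instead of propagating back one step per episode.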

AAMAS Conference 2025 Conference Paper

Scalable Offline Reinforcement Learning for Mean Field Games

  • Axel Brunnbauer
  • Julian Lemmel
  • Zahra Babaiee
  • Sophie A. Neubauer
  • Radu Grosu

Reinforcement learning (RL) algorithms for mean-field games offer a scalable framework for optimizing policies in large populations of interacting agents. Existing methods often depend on online interactions or assume access to system dynamics, limiting their practicality in real-world scenarios where such interactions are infeasible or difficult to model. In this paper, we present Offline Munchausen Mirror Descent (Off-MMD), a novel mean-field RL algorithm that approximates equilibrium policies in mean-field games using purely offline data. By leveraging iterative mirror descent and importance sampling, Off-MMD estimates the mean-field distribution from static datasets without relying on simulation or environment dynamics. Additionally, we incorporate techniques from offline RL to address common issues like Q-value overestimation, ensuring robust policy learning even with limited data coverage. Our algorithm scales to complex environments and demonstrates strong performance on benchmark tasks like crowd exploration or navigation, highlighting its applicability to real-world multi-agent systems where online experimentation is infeasible. We empirically demonstrate the robustness of Off-MMD to low-quality datasets and conduct experiments to investigate its sensitivity to hyperparameter choices.
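
The importance-sampling ingredient, re-weighting offline data collected under a behavior policy to estimate quantities under a target policy, can be shown in a toy setting (a generic sketch, not Off-MMD itself; the policies and rewards below are made up for the example):

```python
# Estimate E_pi[r] from samples drawn under a behavior policy beta, using the
# likelihood ratio pi(a)/beta(a) as an importance weight.
import random

def is_estimate(data, pi, beta):
    """E_pi[r] ~= mean over offline samples of (pi(a)/beta(a)) * r."""
    return sum(pi[a] / beta[a] * r for a, r in data) / len(data)

random.seed(0)
beta = {0: 0.5, 1: 0.5}   # behavior policy that generated the dataset
pi = {0: 0.8, 1: 0.2}     # target policy we want to evaluate
reward = {0: 0.0, 1: 1.0}
data = [(a, reward[a])
        for a in random.choices([0, 1], weights=[0.5, 0.5], k=20000)]
est = is_estimate(data, pi, beta)  # true value: 0.2 * 1.0 = 0.2
```

No further interaction with the environment is needed: the static dataset plus the known action probabilities suffice, which is exactly the offline regime the paper targets.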

ICRA Conference 2025 Conference Paper

Scenario-Based Curriculum Generation for Multi-Agent Driving

  • Axel Brunnbauer
  • Luigi Berducci
  • Peter Priller
  • Dejan Nickovic
  • Radu Grosu

The automated generation of diversified training scenarios has been an important ingredient in many complex learning tasks, especially in real-world application domains such as autonomous driving, where auto-curriculum generation is considered vital for obtaining robust and general policies. However, crafting traffic scenarios with multiple, heterogeneous agents is typically considered a tedious and time-consuming task, especially in more complex simulation environments. To this end, we introduce MATS-Gym, a multi-agent training framework for autonomous driving that uses partial-scenario specifications to generate traffic scenarios with a variable number of agents which are executed in CARLA, a high-fidelity driving simulator. MATS-Gym reconciles scenario execution engines, such as Scenic and ScenarioRunner, with established multi-agent training frameworks where the interaction between the environment and the agents is modeled as a partially observable stochastic game. Furthermore, we integrate MATS-Gym with techniques from unsupervised environment design to automate the generation of adaptive auto-curricula, which is the first application of such algorithms to the domain of autonomous driving. The code is available at https://github.com/AutonomousDrivingExaminer/mats-gym.

AAAI Conference 2025 Conference Paper

The Master Key Filters Hypothesis: Deep Filters Are General

  • Zahra Babaiee
  • Peyman M. Kiasari
  • Daniela Rus
  • Radu Grosu

This paper challenges the prevailing view that convolutional neural network (CNN) filters become increasingly specialized in deeper layers. Motivated by recent observations of clusterable repeating patterns in depthwise separable CNNs (DS-CNNs) trained on ImageNet, we extend this investigation across various domains and datasets. Our analysis of DS-CNNs reveals that deep filters maintain generality, contradicting the expected transition to class-specific features. We demonstrate the generalizability of these filters through transfer learning experiments, showing that frozen filters from models trained on different datasets perform well and can be further improved when sourced from larger, better-performing models. Our findings indicate that spatial features learned by depthwise separable convolutions remain generic across all layers, domains, and architectures. This research provides new insights into the nature of generalization in neural networks, particularly in DS-CNNs, and has significant implications for transfer learning and model design.

NeurIPS Conference 2025 Conference Paper

The Quest for Universal Master Key Filters in DS-CNNs

  • Zahra Babaiee
  • Peyman M. Kiasari
  • Daniela Rus
  • Radu Grosu

A recent study has proposed the "Master Key Filters Hypothesis" for convolutional neural network filters. This paper extends this hypothesis by radically constraining its scope to a single set of just 8 universal filters that depthwise separable convolutional networks inherently converge to. While conventional DS-CNNs employ thousands of distinct trained filters, our analysis reveals these filters are predominantly linear shifts (ax+b) of our discovered universal set. Through systematic unsupervised search, we extracted these fundamental patterns across different architectures and datasets. Remarkably, networks initialized with these 8 unique frozen filters achieve over 80% ImageNet accuracy, and even outperform models with thousands of trainable parameters when applied to smaller datasets. The identified master key filters closely match Difference of Gaussians (DoGs), Gaussians, and their derivatives, structures that are not only fundamental to classical image processing but also strikingly similar to receptive fields in mammalian visual systems. Our findings provide compelling evidence that depthwise convolutional layers naturally gravitate toward this fundamental set of spatial operators regardless of task or architecture. This work offers new insights for understanding generalization and transfer learning through the universal language of these master key filters.

ICML Conference 2025 Conference Paper

Visual Graph Arena: Evaluating Visual Conceptualization of Vision and Multimodal Large Language Models

  • Zahra Babaiee
  • Peyman M. Kiasari
  • Daniela Rus
  • Radu Grosu

Recent advancements in multimodal large language models have driven breakthroughs in visual question answering. Yet a critical gap persists: ‘conceptualization’, the ability to recognize and reason about the same concept despite variations in visual form, a basic ability of human reasoning. To address this challenge, we introduce the Visual Graph Arena (VGA), a dataset featuring six graph-based tasks designed to evaluate and improve AI systems’ capacity for visual abstraction. VGA uses diverse graph layouts (e.g., Kamada-Kawai vs. planar) to test reasoning independent of visual form. Experiments with state-of-the-art vision models and multimodal LLMs reveal a striking divide: humans achieved near-perfect accuracy across tasks, while models totally failed on isomorphism detection and showed limited success in path/cycle tasks. We further identify behavioral anomalies suggesting pseudo-intelligent pattern matching rather than genuine understanding. These findings underscore fundamental limitations in current AI models for visual understanding. By isolating the challenge of representation-invariant reasoning, the VGA provides a framework to drive progress toward human-like conceptualization in AI visual models. The Visual Graph Arena is available at: https://vga.csail.mit.edu/.
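
For reference, the isomorphism notion the models are tested on is simple to state and, for the small graphs involved, simple to check by brute force (illustrative code, unrelated to the VGA implementation):

```python
# Two graphs on n vertices are isomorphic iff some relabeling of the vertices
# maps one edge set exactly onto the other. Brute force over permutations is
# fine for small n.
from itertools import permutations

def isomorphic(n, edges_a, edges_b):
    ea = {frozenset(e) for e in edges_a}  # undirected edges, order-free
    eb = {frozenset(e) for e in edges_b}
    if len(ea) != len(eb):
        return False
    for perm in permutations(range(n)):
        if {frozenset((perm[u], perm[v])) for u, v in ea} == eb:
            return True
    return False
```

A path relabeled in any way remains a path, while a path and a star on the same vertex count already differ in degree sequence, so no relabeling can match them.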

ICRA Conference 2024 Conference Paper

Flock-Formation Control of Multi-Agent Systems using Imperfect Relative Distance Measurements

  • Andreas Brandstätter
  • Scott A. Smolka
  • Scott D. Stoller
  • Ashish Tiwari 0001
  • Radu Grosu

We present distributed distance-based control (DDC), a novel approach for controlling a multi-agent system, such that it achieves a desired formation, in a resource-constrained setting. Our controller is fully distributed and only requires local state-estimation and scalar measurements of inter-agent distances. It does not require an external localization system or inter-agent exchange of state information. Our approach uses spatial-predictive control (SPC), to optimize a cost function given strictly in terms of inter-agent distances and the distance to the target location. In DDC, each agent continuously learns and updates a very abstract model of the actual system, in the form of a dictionary of three independent key-value pairs $(\Delta \vec s, \Delta d)$, where $\Delta d$ is the partial derivative of the distance measurements along a spatial direction $\Delta \vec s$. This is sufficient for an agent to choose the best next action. We validate our approach by using DDC to control a collection of Crazyflie drones to achieve formation flight and reach a target while maintaining flock formation.

ICRA Conference 2024 Conference Paper

Learning Adaptive Safety for Multi-Agent Systems

  • Luigi Berducci
  • Shuo Yang 0007
  • Rahul Mangharam
  • Radu Grosu

Ensuring safety in dynamic multi-agent systems is challenging due to limited information about the other agents. Control Barrier Functions (CBFs) are showing promise for safety assurance, but current methods make strong assumptions about other agents and often rely on manual tuning to balance safety, feasibility, and performance. In this work, we delve into the problem of adaptive safe learning for multi-agent systems with CBFs. We show how emergent behaviour can be profoundly influenced by the CBF configuration, highlighting the necessity for a responsive and dynamic approach to CBF design. We present ASRL, a novel adaptive safe RL framework, to fully automate the optimization of policy and CBF coefficients, enhancing safety and long-term performance through reinforcement learning. By directly interacting with the other agents, ASRL learns to cope with diverse agent behaviours and maintains the cost violations below a desired limit. We evaluate ASRL in a multi-robot system and in competitive multi-agent racing, against learning-based and control-theoretic approaches. We empirically demonstrate the efficacy of ASRL, and assess generalization and scalability to out-of-distribution scenarios.
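
The CBF mechanism whose coefficients such a framework tunes can be illustrated in one dimension (a generic sketch with invented dynamics and constants, not ASRL itself):

```python
# 1-D control barrier function filter: state x with dynamics x' = u, and
# safety set h(x) = x - x_min >= 0. Enforcing h' + alpha*h >= 0 yields the
# closed-form filter u >= -alpha * (x - x_min); alpha is the coefficient a
# method like ASRL would adapt instead of hand-tuning.

def cbf_filter(u_nom, x, x_min=0.0, alpha=1.0):
    """Minimally modify the nominal control so the CBF condition holds."""
    return max(u_nom, -alpha * (x - x_min))

def simulate(x0=1.0, u_nom=-1.0, dt=0.01, steps=1000):
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * cbf_filter(u_nom, x)  # Euler step with filtered control
        xs.append(x)
    return xs
```

Even though the nominal controller pushes straight at the boundary, the filtered trajectory slows down as h(x) shrinks and never leaves the safe set; a larger alpha permits more aggressive approaches, which is exactly the safety/performance trade-off described above.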

ICRA Conference 2024 Conference Paper

Learning with Chemical versus Electrical Synapses: Does it Make a Difference?

  • Mónika Farsang
  • Mathias Lechner
  • David Lung
  • Ramin M. Hasani
  • Daniela Rus
  • Radu Grosu

Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems. Bio-electrical synapses directly transmit neural signals, by enabling fast current flow between neurons. In contrast, bio-chemical synapses transmit neural signals indirectly, through neurotransmitters. Prior work showed that interpretable dynamics for complex robotic control can be achieved by using chemical synapses within a sparse, bio-inspired architecture called Neural Circuit Policies (NCPs). However, a comparison of these two synaptic models within the same architecture remains an unexplored area. In this work we aim to determine the impact of using chemical synapses compared to electrical synapses, in both sparse and all-to-all connected networks. We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions and in the presence of noise. The experiments highlight the substantial influence of the architectural and synaptic-model choices, respectively. Our results show that employing chemical synapses yields noticeable improvements compared to electrical synapses, and that NCPs lead to better results in both synaptic models.

ICLR Conference 2024 Conference Paper

Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels

  • Zahra Babaiee
  • Peyman M. Kiasari
  • Daniela Rus
  • Radu Grosu

Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures that surpass the performance of classical CNNs by a considerable scalability and accuracy margin. This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers. Through an extensive analysis of millions of trained filters, with different sizes and from various models, we employed unsupervised clustering with autoencoders to categorize these filters. Astonishingly, the patterns converged into a few main clusters, each resembling difference-of-Gaussians (DoG) functions and their first- and second-order derivatives. Notably, we classify over 95% and 90% of the filters from state-of-the-art ConvNeXtV2 and ConvNeXt models, respectively. This finding is not merely a technological curiosity; it echoes the foundational models neuroscientists have long proposed for the vision systems of mammals. Our results thus deepen our understanding of the emergent properties of trained DS-CNNs and provide a bridge between artificial and biological visual processing systems. More broadly, they pave the way for more interpretable and biologically-inspired neural network designs in the future.

ICRA Conference 2023 Conference Paper

Multi-Agent Spatial Predictive Control with Application to Drone Flocking

  • Andreas Brandstätter
  • Scott A. Smolka
  • Scott D. Stoller
  • Ashish Tiwari 0001
  • Radu Grosu

We introduce Spatial Predictive Control (SPC), a technique for solving the following problem: given a collection of robotic agents with black-box positional low-level controllers (PLLCs) and a mission-specific distributed cost function, how can a distributed controller achieve and maintain cost-function minimization without a plant model and only positional observations of the environment? Our fully distributed SPC controller is based strictly on the position of the agent itself and on those of its neighboring agents. This information is used in every time step to compute the gradient of the cost function and to perform a spatial look-ahead to predict the best next target position for the PLLC. Using a simulation environment, we show that SPC outperforms Potential Field Controllers, a related class of controllers, on the drone flocking problem. We also show that SPC works on real hardware, and is therefore able to cope with the potential sim-to-real transfer gap. We demonstrate its performance using as many as 16 Crazyflie 2.1 drones in a number of scenarios, including obstacle avoidance.

AAAI Conference 2022 Conference Paper

GoTube: Scalable Statistical Verification of Continuous-Depth Models

  • Sophie A. Gruenbacher
  • Mathias Lechner
  • Ramin Hasani
  • Daniela Rus
  • Thomas A. Henzinger
  • Scott A. Smolka
  • Radu Grosu

We introduce a new statistical verification algorithm that formally quantifies the behavioral robustness of any time-continuous process formulated as a continuous-depth model. Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states. We call our algorithm GoTube. Through its construction, GoTube ensures that the bounding tube is conservative up to a desired probability and up to a desired tightness. GoTube is implemented in JAX and optimized to scale to complex continuous-depth neural network models. Compared to advanced reachability analysis tools for time-continuous neural networks, GoTube does not accumulate overapproximation errors between time steps and avoids the infamous wrapping effect inherent in symbolic techniques. We show that GoTube substantially outperforms state-of-the-art verification tools in terms of the size of the initial ball, speed, time-horizon, task completion, and scalability on a large set of experiments. GoTube is stable and sets the state-of-the-art in terms of its ability to scale to time horizons well beyond what has been previously possible.

ICRA Conference 2022 Conference Paper

Latent Imagination Facilitates Zero-Shot Transfer in Autonomous Racing

  • Axel Brunnbauer
  • Luigi Berducci
  • Andreas Brandstätter
  • Mathias Lechner
  • Ramin M. Hasani
  • Daniela Rus
  • Radu Grosu

World models learn behaviors in a latent imagination space to enhance the sample-efficiency of deep reinforcement learning (RL) algorithms. While learning world models for high-dimensional observations (e.g., pixel inputs) has become practicable on standard RL benchmarks and some games, their effectiveness in real-world robotics applications has not been explored. In this paper, we investigate how such agents generalize to real-world autonomous vehicle control tasks, where advanced model-free deep RL algorithms fail. In particular, we set up a series of time-lap tasks for an F1TENTH racing robot, equipped with a high-dimensional LiDAR sensor, on a set of test tracks with a gradual increase in their complexity. In this continuous-control setting, we show that model-based agents capable of learning in imagination substantially outperform model-free agents with respect to performance, sample efficiency, successful task completion, and generalization. Moreover, we show that the generalization ability of model-based agents strongly depends on the choice of their observation model. We provide extensive empirical evidence for the effectiveness of world models provided with long enough memory horizons in sim2real tasks.

ICRA Conference 2021 Conference Paper

Adversarial Training is Not Ready for Robot Learning

  • Mathias Lechner
  • Ramin M. Hasani
  • Radu Grosu
  • Daniela Rus
  • Thomas A. Henzinger

Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance. While adversarial training appears to enhance the robustness and safety of a deep model deployed in open-world decision-critical applications, counterintuitively, it induces undesired behaviors in robot learning settings. In this paper, we show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects, namely transient, systematic, and conditional errors. We first generalize adversarial training to a safety-domain optimization scheme allowing for more generic specifications. We then prove that such a learning process tends to cause certain error profiles. We support our theoretical results by a thorough experimental safety analysis in a robot-learning task. Our results suggest that adversarial training is not yet ready for robot learning.

AAAI Conference 2021 Conference Paper

Liquid Time-constant Networks

  • Ramin Hasani
  • Mathias Lechner
  • Alexander Amini
  • Daniela Rus
  • Radu Grosu

We introduce a new class of time-continuous recurrent neural network models. Instead of declaring a learning system’s dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates. The resulting models represent dynamical systems with varying (i.e., liquid) time-constants coupled to their hidden state, with outputs being computed by numerical differential equation solvers. These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks. To demonstrate these properties, we first take a theoretical approach to find bounds over their dynamics, and compute their expressive power by the trajectory length measure in a latent trajectory space. We then conduct a series of time-series prediction experiments to manifest the approximation capability of Liquid Time-Constant Networks (LTCs) compared to classical and modern RNNs.
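
A single-neuron caricature of the liquid time-constant idea can be written down directly (an illustrative sketch, not the paper's full model; the gate, constants, and Euler solver below are simplifications):

```python
# One LTC-style unit: dx/dt = -(1/tau + f(I)) * x + f(I) * A, where the gate
# f depends on the input I. The effective time constant 1/(1/tau + f(I)) thus
# varies ("liquid") with the input, and the state stays bounded.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ltc_step(x, I, tau=1.0, A=1.0, dt=0.01):
    f = sigmoid(I)  # input-dependent gate
    return x + dt * (-(1.0 / tau + f) * x + f * A)  # explicit Euler step

def simulate(I=2.0, steps=2000):
    x = 0.0
    for _ in range(steps):
        x = ltc_step(x, I)
    return x
```

For constant input the state converges to the fixed point f(I)·A / (1/tau + f(I)), which always lies between 0 and A, matching the bounded-behavior claim above in this toy case.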

AAAI Conference 2021 Conference Paper

On the Verification of Neural ODEs with Stochastic Guarantees

  • Sophie Grunbacher
  • Ramin Hasani
  • Mathias Lechner
  • Jacek Cyranka
  • Scott A. Smolka
  • Radu Grosu

We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time-horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (accumulation of over-approximation errors) by performing local optimization steps to expand safe regions instead of repeatedly forward-propagating them as is done by deterministic reachability methods. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method to compute gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
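
The idea of obtaining derivatives in a single forward pass, with no backpropagation, can be illustrated with dual numbers (a generic forward-mode AD toy; SLR's actual forward-mode adjoint sensitivity method is more involved):

```python
# Forward-mode differentiation via dual numbers: carry (value, derivative)
# through the computation, applying the chain rule at each operation.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot  # value and its derivative w.r.t. x

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,                     # product value
                    self.dot * other.val + self.val * other.dot)  # product rule

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and df/dx together in one forward pass."""
    return f(Dual(x, 1.0)).dot
```

Unlike reverse mode, no computation graph needs to be stored and replayed backwards, which is why forward-mode variants suit the repeated local optimizations described above.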

ICML Conference 2021 Conference Paper

On-Off Center-Surround Receptive Fields for Accurate and Robust Image Classification

  • Zahra Babaiee
  • Ramin M. Hasani
  • Mathias Lechner
  • Daniela Rus
  • Radu Grosu

Robustness to variations in lighting conditions is a key objective for any deep vision system. To this end, our paper extends the receptive field of convolutional neural networks with two residual components, ubiquitous in the visual processing system of vertebrates: On-center and off-center pathways, with an excitatory center and inhibitory surround; OOCS for short. The On-center pathway is excited by the presence of a light stimulus in its center, but not in its surround, whereas the Off-center pathway is excited by the absence of a light stimulus in its center, but not in its surround. We design OOCS pathways via a difference of Gaussians, with their variance computed analytically from the size of the receptive fields. OOCS pathways complement each other in their response to light stimuli, ensuring this way a strong edge-detection capability, and as a result an accurate and robust inference under challenging lighting conditions. We provide extensive empirical evidence showing that networks supplied with OOCS pathways gain accuracy and illumination-robustness from the novel edge representation, compared to other baselines.
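
The On/Off difference-of-Gaussians construction can be sketched directly (the kernel size and variances below are illustrative choices, not the analytically computed values from the paper):

```python
# On-center kernel: excitatory (positive) center minus a wider inhibitory
# (negative) surround, built as a difference of two 2-D Gaussians. The
# Off-center kernel is its sign-flipped complement.
import math

def gaussian2d(x, y, sigma):
    return math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)

def dog_kernel(size=5, sigma_center=1.0, sigma_surround=2.0):
    r = size // 2
    return [[gaussian2d(x, y, sigma_center) - gaussian2d(x, y, sigma_surround)
             for x in range(-r, r + 1)] for y in range(-r, r + 1)]

on_center = dog_kernel()
off_center = [[-v for v in row] for row in on_center]
```

Convolving an image with such a kernel responds most strongly where intensity changes, which is the edge-detection behavior the abstract attributes to the complementary pathways.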

ICML Conference 2020 Conference Paper

A Natural Lottery Ticket Winner: Reinforcement Learning with Ordinary Neural Circuits

  • Ramin M. Hasani
  • Mathias Lechner
  • Alexander Amini
  • Daniela Rus
  • Radu Grosu

We propose a neural information processing system obtained by re-purposing the function of a biological neural circuit model to govern simulated and real-world control tasks. Inspired by the structure of the nervous system of the soil-worm, C. elegans, we introduce ordinary neural circuits (ONCs), defined as the model of biological neural circuits reparameterized for the control of alternative tasks. We first demonstrate that ONCs realize networks with higher maximum flow compared to arbitrary wired networks. We then learn instances of ONCs to control a series of robotic tasks, including the autonomous parking of a real-world rover robot. For reconfiguration of the purpose of the neural circuit, we adopt a search-based optimization algorithm. Ordinary neural circuits perform on par and, in some cases, significantly surpass the performance of contemporary deep learning models. ONC networks are compact, 77% sparser than their counterpart neural controllers, and their neural dynamics are fully interpretable at the cell-level.

ICRA Conference 2020 Conference Paper

Gershgorin Loss Stabilizes the Recurrent Neural Network Compartment of an End-to-end Robot Learning Scheme

  • Mathias Lechner
  • Ramin M. Hasani
  • Daniela Rus
  • Radu Grosu

Traditional robotic control suites require profound task-specific knowledge for designing, building and testing control software. The rise of Deep Learning has enabled end-to-end solutions to be learned entirely from data, requiring minimal knowledge about the application area. We design a learning scheme to train end-to-end linear dynamical systems (LDS) by gradient descent in imitation-learning robotic domains. We introduce a new regularization loss component, together with a learning algorithm, that improves the stability of the learned autonomous system by forcing the eigenvalues of the internal state updates of an LDS to be negative reals. We evaluate our approach on a series of real-life and simulated robotic experiments, in comparison to linear and nonlinear Recurrent Neural Network (RNN) architectures. Our results show that our stabilizing method significantly improves the test performance of LDS, enabling such linear models to match the performance of contemporary nonlinear RNN architectures. A video of the obstacle avoidance performance of our method on a mobile robot, in unseen environments, compared to other methods, can be viewed at https://youtu.be/mhEsCoNao5E.
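
A Gershgorin-style penalty of this kind can be sketched from the circle theorem (an illustrative loss in plain Python; the paper's exact formulation may differ):

```python
# By the Gershgorin circle theorem, every eigenvalue of A lies in some disc
# centered at A[i][i] with radius sum_{j != i} |A[i][j]|. Penalizing
# max(0, A[i][i] + radius_i) therefore pushes every disc, and hence every
# eigenvalue's real part, below zero.

def gershgorin_loss(A):
    loss = 0.0
    for i, row in enumerate(A):
        radius = sum(abs(v) for j, v in enumerate(row) if j != i)
        loss += max(0.0, row[i] + radius)  # > 0 iff disc i crosses Re = 0
    return loss
```

The penalty is zero exactly when all discs lie in the open left half-plane, a sufficient (though conservative) condition for the stability property the abstract describes.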

ICRA Conference 2019 Conference Paper

Designing Worm-inspired Neural Networks for Interpretable Robotic Control

  • Mathias Lechner
  • Ramin M. Hasani
  • Manuel Zimmer
  • Thomas A. Henzinger
  • Radu Grosu

In this paper, we design novel liquid time-constant recurrent neural networks for robotic control, inspired by the brain of the nematode, C. elegans. In the worm's nervous system, neurons communicate through nonlinear time-varying synaptic links established amongst them by their particular wiring structure. This property enables neurons to express liquid time-constants dynamics and therefore allows the network to originate complex behaviors with a small number of neurons. We identify neuron-pair communication motifs as design operators and use them to configure compact neuronal network structures to govern sequential robotic tasks. The networks are systematically designed to map the environmental observations to motor actions, by their hierarchical topology from sensory neurons, through recurrently-wired interneurons, to motor neurons. The networks are then parametrized in a supervised-learning scheme by a search-based algorithm. We demonstrate that obtained networks realize interpretable dynamics. We evaluate their performance in controlling mobile and arm robots, and compare their attributes to other artificial neural network-based control agents. Finally, we experimentally show their superior resilience to environmental noise, compared to the existing machine learning-based methods.

NeurIPS Conference 2018 Conference Paper

Dynamic Network Model from Partial Observations

  • Elahe Ghalebi
  • Baharan Mirzasoleiman
  • Radu Grosu
  • Jure Leskovec

Can evolving networks be inferred and modeled without directly observing their nodes and edges? In many applications, the edges of a dynamic network might not be observed, but one can observe the dynamics of stochastic cascading processes (e.g., information diffusion, virus propagation) occurring over the unobserved network. While there have been efforts to infer networks based on such data, providing a generative probabilistic model that is able to identify the underlying time-varying network remains an open question. Here we consider the problem of inferring generative dynamic network models based on network cascade diffusion data. We propose a novel framework for providing a non-parametric dynamic network model---based on a mixture of coupled hierarchical Dirichlet processes---based on data capturing cascade node infection times. Our approach allows us to infer the evolving community structure in networks and to obtain an explicit predictive distribution over the edges of the underlying network---including those that were not involved in transmission of any cascade, or are likely to appear in the future. We show the effectiveness of our approach using extensive experiments on synthetic as well as real-world networks.

IROS Conference 2015 Conference Paper

Generic sensor fusion package for ROS

  • Denise Ratasich
  • Bernhard Frömel
  • Oliver Höftberger
  • Radu Grosu

Sensor fusion combines multiple sensor measurements to improve a controller's knowledge about the internal state of an observed physical environment. Many such sensor fusion techniques exist and have been implemented for the Robot Operating System (ROS). However, they often have been developed for specific applications and cannot be easily reused for other applications. Reasons are the use of application-specific, partly undocumented interfaces, and the often limited reconfigurability caused by a tight coupling of the implementation to an application-specific purpose. Our approach is based on the concept of a fusion node which provides a configurable sensor fusion service with a generic interface. Fusion nodes can be interconnected to combine several sensor fusion techniques, can be attached to any single-dimension value sensor, can handle asynchronous multi-rate measurements and are robust regarding indeterministic, best-effort communication. This paper presents, to the best of our knowledge, the first generic sensor fusion package (GSFP) for ROS which collects various exemplary sensor fusion methods implemented as fusion nodes. We demonstrate the feasibility of our package in a small test application. Main benefits of our contribution are the developed ROS package's independence regarding specific sensors or applications, the easy integration of configurable fusion nodes in existing applications, and the composition of fusion nodes to realize complex sensor fusion scenarios.
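
As a minimal illustration of what a single fusion step inside such a node might look like (a generic inverse-variance sketch, not GSFP's actual interface or implementation):

```python
# Inverse-variance weighted fusion of two scalar measurements of the same
# quantity: the more certain measurement gets the larger weight, and the
# fused estimate is more certain than either input.

def fuse(m1, var1, m2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2          # weights = inverse variances
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)   # weighted mean
    return mean, 1.0 / (w1 + w2)             # fused variance shrinks
```

Chaining such steps across interconnected nodes is the kind of composition the package's fusion-node architecture is designed to support.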

MFCS Conference 2000 Invited Paper

And/Or Hierarchies and Round Abstraction

  • Radu Grosu

Sequential and parallel composition are the most fundamental operators for the incremental construction of complex concurrent systems. They reflect the temporal and spatial properties of these systems, respectively. Hiding temporal detail, such as internal computation steps, supports temporal scalability and may turn an asynchronous system into a synchronous one. Hiding spatial detail, such as internal variables, supports spatial scalability and may turn a synchronous system into an asynchronous one. In this paper we use several examples to show that a language explicitly supporting both sequential and parallel composition operators is a natural setting for designing heterogeneous synchronous and asynchronous systems. The language we use is Shrm, a visual language that backs up the popular and/or hierarchies of statecharts with a well-defined compositional semantics.