Arrow Research search

Author name cluster

Holger Voos

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

11 papers
2 author rows

Possible papers

11

IROS Conference 2025 Conference Paper

Category-level Meta-learned NeRF Priors for Efficient Object Mapping

  • Saad Ejaz
  • Hriday Bavle
  • Laura Ribeiro
  • Holger Voos
  • Jose Luis Sanchez-Lopez

In 3D object mapping, category-level priors enable efficient object reconstruction and canonical pose estimation, requiring only a single prior per semantic category (e.g., chair, book, laptop, etc.). DeepSDF has been used predominantly as a category-level shape prior, but it struggles to reconstruct sharp geometry and is computationally expensive. In contrast, NeRFs capture fine details but have yet to be effectively integrated with category-level priors in a real-time multi-object mapping framework. To bridge this gap, we introduce PRENOM, a Prior-based Efficient Neural Object Mapper that integrates category-level priors with object-level NeRFs to enhance reconstruction efficiency and enable canonical object pose estimation. PRENOM gets to know objects on a first-name basis by meta-learning on synthetic reconstruction tasks generated from open-source shape datasets. To account for object category variations, it employs a multi-objective genetic algorithm to optimize the NeRF architecture for each category, balancing reconstruction quality and training time. Additionally, prior-based probabilistic ray sampling directs sampling toward expected object regions, accelerating convergence and improving reconstruction quality under constrained resources. Experimental results highlight the ability of PRENOM to achieve high-quality reconstructions while maintaining computational feasibility. Specifically, comparisons with prior-free NeRF-based approaches on a synthetic dataset show a 21% lower Chamfer distance. Furthermore, evaluations against other approaches using shape priors on a noisy real-world dataset indicate a 13% improvement averaged across all reconstruction metrics, and comparable pose and size estimation accuracy, while being trained for 5× less time. Code available at: https://github.com/snt-arg/PRENOM
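The prior-based probabilistic ray sampling the abstract mentions can be pictured with a minimal sketch (not PRENOM's implementation; the function name, the Gaussian depth model, and the 70/30 mixing ratio are assumptions for illustration): samples along each camera ray are biased toward the depth where the category prior expects the object surface, with a uniform fraction retained so free space is still covered.

```python
import random

def sample_ray_depths(t_near, t_far, prior_mean, prior_std,
                      n_samples, prior_frac=0.7):
    """Mix uniform depth samples over [t_near, t_far] with samples drawn
    around the depth where the category prior expects the object surface."""
    depths = []
    for _ in range(n_samples):
        if random.random() < prior_frac:
            # biased sample: Gaussian around the expected surface depth,
            # clamped to the valid ray interval
            t = min(max(random.gauss(prior_mean, prior_std), t_near), t_far)
        else:
            # uniform "exploration" sample so empty space is still observed
            t = random.uniform(t_near, t_far)
        depths.append(t)
    return sorted(depths)
```

Concentrating samples near the expected surface is what lets the mapper converge with far fewer ray evaluations under a compute budget.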

IROS Conference 2025 Conference Paper

MPC-based Deep Reinforcement Learning Method for Space Robotic Control with Fuel Sloshing Mitigation

  • Mahya Ramezani
  • M. Amin Alandihallaj
  • Baris Can Yalçin
  • Miguel Angel Olivares-Méndez
  • Holger Voos

This paper presents an integrated Reinforcement Learning (RL) and Model Predictive Control (MPC) framework for autonomous satellite docking with a partially filled fuel tank. Traditional docking control faces challenges due to fuel sloshing in microgravity, which induces unpredictable forces affecting stability. To address this, we integrate Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) RL algorithms with MPC, leveraging MPC’s predictive capabilities to accelerate RL training and improve control robustness. The proposed approach is validated through experiments in the Zero-G Lab of SnT for planar stabilization and through high-fidelity numerical simulations for 6-DOF docking with fuel sloshing dynamics. Simulation results demonstrate that SAC-MPC achieves superior docking accuracy, higher success rates, and lower control effort, outperforming standalone RL and PPO-MPC methods. This study advances fuel-efficient and disturbance-resilient satellite docking, enhancing the feasibility of on-orbit refueling and servicing missions.
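One common way MPC's predictions can accelerate RL training is a warm-start blend in which the executed action anneals from the MPC solution toward the learned policy; the sketch below illustrates that general idea only, not the paper's exact PPO/SAC-MPC integration (the linear annealing schedule and warm-up length are assumed).

```python
def blended_action(rl_action, mpc_action, step, warmup_steps=10_000):
    """Early in training, lean on the MPC solution; linearly anneal
    toward the learned RL policy as training progresses."""
    w = max(0.0, 1.0 - step / warmup_steps)  # MPC weight decays to 0
    return [w * m + (1.0 - w) * a for m, a in zip(mpc_action, rl_action)]
```

Early episodes then behave like a stabilizing MPC controller, giving the RL agent informative, non-crashing rollouts while it is still exploring.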

IROS Conference 2024 Conference Paper

Learning High-level Semantic-Relational Concepts for SLAM

  • Jose Andres Millan-Romera
  • Hriday Bavle
  • Muhammad Shaheer
  • Martin R. Oswald
  • Holger Voos
  • Jose Luis Sanchez-Lopez

Recent works on SLAM extend their pose graphs with higher-level semantic concepts like Rooms, exploiting relationships between them to provide not only a richer representation of the situation/environment but also to improve the accuracy of its estimation. Concretely, our previous work, Situational Graphs (S-Graphs+), a pioneer in jointly leveraging semantic relationships in the factor optimization process, relies on semantic entities such as Planes and Rooms, whose relationship is mathematically defined. Nevertheless, there is no unique approach to finding all the hidden patterns in lower-level factor graphs that correspond to high-level concepts of different natures. This is currently tackled with ad-hoc algorithms, which limits graph expressiveness. To overcome this limitation, in this work we propose an algorithm based on Graph Neural Networks for learning high-level semantic-relational concepts that can be inferred from the low-level factor graph. Given a set of mapped Planes, our algorithm is capable of inferring the Room entities relating those Planes. Additionally, to demonstrate the versatility of our method, our algorithm can infer an additional semantic-relational concept, i.e., Wall, and its relationship with its Planes. We validate our method on both simulated and real datasets, demonstrating improved performance over two baseline approaches. Furthermore, we integrate our method into the S-Graphs+ algorithm, providing improved pose and map accuracy compared to the baseline while further enhancing the scene representation.
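At its core, a Graph Neural Network over mapped Planes performs message passing on a plane graph; the dependency-free toy below shows one mean-aggregation round (the dict-based graph, feature layout, and concatenation update are illustrative assumptions, and the paper's actual architecture differs).

```python
def message_pass(features, edges):
    """One round of mean-aggregation message passing.
    features: {plane_id: [f0, f1, ...]}; edges: list of (u, v) pairs."""
    neigh = {p: [] for p in features}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    updated = {}
    for p, f in features.items():
        msgs = [features[q] for q in neigh[p]] or [f]
        mean = [sum(col) / len(msgs) for col in zip(*msgs)]
        # concatenate a plane's own features with its aggregated neighbours
        updated[p] = f + mean
    return updated
```

After a few such rounds, each plane's representation encodes its relational context, which a classifier head can then group into Room or Wall entities.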

IROS Conference 2023 Conference Paper

Graph-Based Global Robot Localization Informing Situational Graphs with Architectural Graphs

  • Muhammad Shaheer
  • Jose Andres Millan-Romera
  • Hriday Bavle
  • Jose Luis Sanchez-Lopez
  • Javier Civera 0001
  • Holger Voos

In this paper, we propose a solution for legged robot localization using architectural plans. Our specific contributions towards this goal are several. Firstly, we develop a method for converting the plan of a building into what we denote as an architectural graph (A-Graph). When the robot starts moving in an environment, we assume it has no knowledge about it, and it estimates an online situational graph representation (S-Graph) of its surroundings. We develop a novel graph-to-graph matching method in order to relate the S-Graph estimated online from the robot sensors and the A-Graph extracted from the building plans. Note the challenge in this, as the S-Graph may show a partial view of the full A-Graph, their nodes are heterogeneous and their reference frames are different. After the matching, both graphs are aligned and merged, resulting in what we denote as an informed Situational Graph (iS-Graph), with which we achieve global robot localization and exploitation of prior knowledge from the building plans. Our experiments show that our pipeline achieves higher robustness and a significantly lower pose error than several LiDAR localization baselines. Paper Video: https://youtu.be/3Pv7y8aOsUY
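The graph-to-graph matching step can be pictured in toy form as searching for the S-Graph-to-A-Graph node assignment that preserves the most adjacency relations. The brute-force search below is only an illustrative baseline (real S-Graph/A-Graph nodes are heterogeneous, and matching must handle partial views and differing reference frames, which this sketch ignores):

```python
from itertools import permutations

def match_graphs(s_nodes, s_edges, a_nodes, a_edges):
    """Return the S->A node mapping preserving the most adjacencies.
    Exhaustive search: feasible only for very small graphs."""
    a_set = {frozenset(e) for e in a_edges}
    best_map, best_score = None, -1
    for perm in permutations(a_nodes, len(s_nodes)):
        mapping = dict(zip(s_nodes, perm))
        # count S-Graph edges whose images are also A-Graph edges
        score = sum(frozenset((mapping[u], mapping[v])) in a_set
                    for u, v in s_edges)
        if score > best_score:
            best_map, best_score = mapping, score
    return best_map, best_score
```

Once a mapping is fixed, the two graphs can be aligned into a single frame and merged, which is the role the iS-Graph plays in the paper.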

IROS Conference 2023 Conference Paper

Marker-Based Visual SLAM Leveraging Hierarchical Representations

  • Ali Tourani
  • Hriday Bavle
  • Jose Luis Sanchez-Lopez
  • Rafael Muñoz-Salinas
  • Holger Voos

Fiducial markers can encode rich information about the environment and aid Visual SLAM (VSLAM) approaches in reconstructing maps with practical semantic information. Current marker-based VSLAM approaches mainly utilize markers for improving feature detections in low-feature environments and/or incorporating loop closure constraints, generating only low-level geometric maps of the environment prone to inaccuracies in complex environments. To bridge this gap, this paper presents a VSLAM approach utilizing a monocular camera along with fiducial markers to generate hierarchical representations of the environment while improving the camera pose estimate. The proposed approach detects semantic entities from the surroundings, including walls, corridors, and rooms encoded within markers, and appropriately adds topological constraints among them. Experimental results on a real-world dataset collected with a robot demonstrate that the proposed approach outperforms a marker-based VSLAM baseline in terms of accuracy, given the addition of new constraints while creating enhanced map representations. Furthermore, it shows satisfactory results when comparing the reconstructed map quality to the one rebuilt using a LiDAR SLAM approach.

IROS Conference 2019 Conference Paper

Arguing Security of Autonomous Robots

  • Nico Hochgeschwender
  • Gary Cornelius
  • Holger Voos

Autonomous robots are already being used, for example, as tour guides, receptionists, or office-assistants. The proximity to humans and the possibility to physically interact with them highlights the importance of developing secure robot applications. It is crucial to consider security implications to be an important part of the robot application’s development process. Adding security later in the application’s life-cycle usually leads to high costs, or is not possible due to earlier design decisions. In this work, we present the Robot Application Security Process (RASP) as a lightweight process that enables the development of secure robot applications. Together with RASP we introduce the role of a Security Engineer (SecEng) as an important stakeholder in any robot application development process. RASP enables the SecEng to verify the completeness of his work and allows him to argue about the application’s security with other stakeholders. Furthermore, we demonstrate how the RASP supports the SecEng and also other developers in their daily work.

IROS Conference 2015 Conference Paper

An approach for a distributed world model with QoS-based perception algorithm adaptation

  • Sebastian Blumenthal
  • Nico Hochgeschwender
  • Erwin Prassler
  • Holger Voos
  • Herman Bruyninckx

This paper presents a distributed world model that is able to adapt to changes in the Quality of Service (QoS) of the communication layer by online reconfiguration of perception algorithms. The approach consists of (a) a mechanism for storage, exchange and processing of world model data and (b) a feedback loop that incorporates reasoning techniques to adapt to QoS changes immediately. The latter introduces a Level of Detail (LoD) metric based on a spatial resolution in order to infer an upper bound for the amount of data that can be transmitted without violating an application-specific transmission delay. Experiments have been performed with Octree-based subsampling techniques applied to data originating from an RGB-D camera, using simulated and real-world data sets with time-varying bandwidth as the employed QoS measure.
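The bandwidth-to-LoD reasoning can be made concrete with a small sketch: from the measured bandwidth and the application-specific delay budget, derive the maximum number of points that can be sent, then coarsen the octree until the cloud fits. The 16-byte point size and the per-level reduction ratio are assumed values for illustration, not figures from the paper.

```python
def max_points(bandwidth_bps, max_delay_s, bytes_per_point=16):
    """Upper bound on points transmittable within the delay budget."""
    budget_bytes = bandwidth_bps * max_delay_s / 8.0
    return int(budget_bytes // bytes_per_point)

def coarsen_until_fits(n_points, budget, reduction_per_level=0.25):
    """Drop octree levels (each keeping ~25% of points, an assumed
    ratio) until the cloud fits; returns (levels_dropped, points)."""
    levels = 0
    while n_points > budget:
        n_points = int(n_points * reduction_per_level)
        levels += 1
    return levels, n_points
```

This is the shape of the feedback loop: when the QoS monitor reports lower bandwidth, the budget shrinks and the subsampler switches to a coarser spatial resolution before the delay bound is violated.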

FLAP Journal 2015 Journal Article

Retalis Language for Information Engineering in Autonomous Robot Software.

  • Pouyan Ziafati
  • Mehdi Dastani
  • John-Jules Ch. Meyer
  • Leon van der Torre
  • Holger Voos

Robotic information engineering is the processing and management of data to create knowledge of the robot’s environment. It is an essential robotic technique to apply AI methods such as situation awareness, task-level planning and knowledge-intensive task execution. Consequently, information engineering has been identified as a major challenge in making robotic systems more responsive to real-world situations. The Retalis language integrates ELE and SLR, two logic-based languages. Retalis is used to develop information engineering components of autonomous robots. In such a component, ELE is used for temporal and logical reasoning, and data transformation in flows of data. SLR is used to implement a knowledge base maintaining a history of events. SLR supports state-based representation of knowledge built upon discrete sensory data, management of sensory data in active memories, and synchronization of queries over asynchronous sensory data. In this paper, we introduce eight requirements for robotic information engineering, and we show how Retalis unifies and advances the state-of-the-art research on robotic information engineering. Moreover, we evaluate the efficiency of Retalis by implementing an application for a NAO robot. Retalis receives events about the positions of objects with respect to the top camera of the NAO robot, the transformation among the coordinate frames of the NAO robot, and the location of the NAO robot in the environment. About 1,900 events per second are processed in real time to calculate the positions of objects in the environment.

IROS Conference 2012 Conference Paper

Towards learning of safety knowledge from human demonstrations

  • Philipp Ertle
  • Michel Tokic
  • Richard Cubek
  • Holger Voos
  • Dirk Söffker

Future autonomous service robots are intended to operate in open and complex environments. This in turn complicates ensuring safe operation. The tenor of the few available investigations is the need to dynamically assess operational risks. Furthermore, a new kind of hazard arises from the robot's capability to manipulate the environment: hazardous environmental object interactions. One of the open questions in safety research is how to integrate safety knowledge into robotic systems, enabling these systems to behave in a safety-conscious way in hazardous situations. In this paper a safety procedure is described in which learning of safety knowledge from human demonstration is considered. Within the procedure, a task is demonstrated to the robot, which observes object-to-object relations and labels situational data as commanded by the human. Based on this data, several supervised learning techniques are evaluated for finally extracting safety knowledge. Results indicate that Decision Trees offer promising opportunities.

AAMAS Conference 2008 Conference Paper

OpCog: An Industrial Development Approach for Cognitive Agent Systems in Military UAV Applications (Short Paper)

  • Kai Reichel
  • Nico Hochgeschwender
  • Holger Voos

Future applications of unmanned aerial vehicles (UAVs), especially in military missions, require the operation of UAVs with a high level of autonomy. Autonomous UAVs could be developed using agent technologies, and this paper therefore investigates such an approach from an industrial perspective. Taking into account time, budget, and available knowledge on the industrial side, as well as the need for UAV operators to understand the behavior of the autonomous system, this paper proposes the application of cognitive agents and a design procedure that supports the transition from the pure operational requirements and functional specification into a cognitive agent system, called the Operational driven development approach for Cognitive Systems (OpCog).