
Author name cluster

Xinjie Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers (4)

NeurIPS 2025 Conference Paper

DIPO: Dual-State Images Controlled Articulated Object Generation Powered by Diverse Data

  • Ruiqi Wu
  • Xinjie Wang
  • Chun-Le Guo
  • Jiaxiong Qiu
  • Chongyi Li
  • Lichao Huang
  • Zhizhong Su
  • Ming-Ming Cheng

We present DIPO, a novel framework for the controllable generation of articulated 3D objects from a pair of images: one depicting the object in a resting state and the other in an articulated state. Compared to the single-image approach, our dual-image input imposes only a modest overhead for data collection, but provides important motion information that reliably guides the prediction of kinematic relationships between parts. Specifically, we propose a dual-image diffusion model that captures relationships between the image pair to generate part layouts and joint parameters. In addition, we introduce a Chain-of-Thought (CoT) based graph reasoner that explicitly infers part connectivity relationships. To further improve robustness and generalization on complex articulated objects, we develop a fully automated dataset expansion pipeline, named LEGO-Art, that enriches the diversity and complexity of the PartNet-Mobility dataset. We also propose PM-X, a large-scale dataset of complex articulated 3D objects, accompanied by rendered images, URDF annotations, and textual descriptions. Extensive experiments demonstrate that DIPO significantly outperforms existing baselines in both the resting state and the articulated state, while the proposed PM-X dataset further enhances generalization to diverse and structurally complex articulated objects. Our code and dataset are available at https://github.com/RQ-Wu/DIPO.
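The abstract's core interface (a dual-image pair in, part layouts and joint parameters out) can be sketched as plain data structures. The sketch below is illustrative only: the names (Joint, PartLayout, generate_layout) and fields are assumptions following general URDF conventions, not the authors' actual code, which lives at the linked repository.

```python
# A minimal sketch, assuming URDF-style conventions, of the outputs a
# dual-image articulated-object generator would need to produce:
# per-part 3D boxes plus joint parameters linking them. All names here
# are hypothetical, for exposition only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Joint:
    """URDF-style joint relating a child part to its parent."""
    joint_type: str                     # e.g. "revolute" or "prismatic"
    parent: int                         # index of the parent part
    child: int                          # index of the child part
    axis: Tuple[float, float, float]    # motion axis in the parent frame
    origin: Tuple[float, float, float]  # joint anchor position
    limit: Tuple[float, float]          # (lower, upper) motion range

@dataclass
class PartLayout:
    """Per-part oriented boxes plus the kinematic graph between them."""
    # one (cx, cy, cz, sx, sy, sz, yaw) tuple per part
    boxes: List[Tuple[float, ...]] = field(default_factory=list)
    joints: List[Joint] = field(default_factory=list)

def generate_layout(rest_image, articulated_image) -> PartLayout:
    """Stand-in for the dual-image diffusion model: the image pair shows
    which parts moved, constraining joint type, axis, and range."""
    raise NotImplementedError("placeholder for the learned model")
```

The value of the second image is visible in the Joint fields: observing which parts moved between the two states is direct evidence for joint_type, axis, and limit, which a single resting-state image cannot provide.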

IROS 2025 Conference Paper

GeoFlow-SLAM: A Robust Tightly-Coupled RGBD-Inertial and Legged Odometry Fusion SLAM for Dynamic Legged Robotics

  • Tingyang Xiao
  • Xiaolin Zhou
  • Liu Liu
  • Wei Sui
  • Wei Feng
  • Jiaxiong Qiu
  • Xinjie Wang
  • Zhizhong Su

This paper presents GeoFlow-SLAM, a robust and effective tightly-coupled RGBD-inertial and legged odometry fusion SLAM system for legged robots undergoing aggressive, high-frequency motions. By integrating geometric consistency, legged odometry constraints, and dual-stream optical flow (GeoFlow), our method addresses three critical challenges: feature matching failures and pose initialization failures during fast locomotion, and visual feature scarcity in texture-less scenes. Specifically, in rapid-motion scenarios, feature matching is notably enhanced by the dual-stream optical flow, which combines prior map points and poses. Additionally, we propose a robust pose initialization method that handles fast locomotion and IMU error on legged robots by integrating IMU/legged odometry, inter-frame Perspective-n-Point (PnP), and Generalized Iterative Closest Point (GICP). Furthermore, a novel optimization framework that tightly couples depth-to-map and GICP geometric constraints is introduced to improve robustness and accuracy in long-duration, visually texture-less environments. The proposed algorithms achieve state-of-the-art (SOTA) performance on our collected legged-robot datasets and on open-source datasets. To further promote research and development, the datasets and code will be made publicly available at https://github.com/HorizonRobotics/GeoFlowSlam.
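The initialization cascade the abstract describes (flow-guided PnP first, IMU/legged-odometry prediction as the fallback, GICP refinement last) can be sketched as control flow. Everything below is an assumption for illustration: the helper callables (track_flow, solve_pnp, run_gicp), the inlier threshold, and the 4x4 homogeneous-matrix pose representation are hypothetical, not the paper's implementation.

```python
# A minimal sketch of a PnP -> odometry-prior -> GICP initialization
# cascade. Helpers and threshold are hypothetical stand-ins.

MIN_PNP_INLIERS = 30  # assumed threshold, not taken from the paper

def initialize_pose(frame, prev_pose, imu_leg_delta,
                    track_flow, solve_pnp, run_gicp):
    """Return an initial 4x4 pose for the new frame.

    track_flow(frame, prev_pose) -> 2D-3D matches from dual-stream flow
    solve_pnp(matches)           -> (pose or None, inlier count)
    run_gicp(frame, init_pose)   -> GICP-refined pose against the map
    imu_leg_delta                -> relative motion predicted by the
                                    tightly-coupled IMU/legged odometry
    """
    # 1) Dual-stream optical flow: match current features against both
    #    the previous frame and reprojected prior map points.
    matches = track_flow(frame, prev_pose)
    pose, n_inliers = solve_pnp(matches)

    # 2) Aggressive, high-frequency motion can make PnP fail or turn
    #    degenerate; fall back to the IMU/legged-odometry prediction.
    if pose is None or n_inliers < MIN_PNP_INLIERS:
        pose = imu_leg_delta @ prev_pose

    # 3) Refine with GICP geometric constraints, which also hold in
    #    texture-less scenes where visual features are scarce.
    return run_gicp(frame, pose)
```

The design choice mirrored here is that each stage covers a failure mode of the previous one: PnP needs matches, the odometry prior needs neither, and GICP tightens whichever estimate survives using geometry alone.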

ICRA 2021 Conference Paper

Real-time Instance Detection with Fast Incremental Learning

  • Richard Bormann
  • Xinjie Wang
  • Markus Völk
  • Kilian Kleeberger
  • Jochen Lindermayr

Object instance detection is a highly relevant task for several robotic applications such as automated order picking or household and hospital assistance robots. In these applications, holistic scene labeling is often not required; it is sufficient to find a certain object type of interest, e.g., for picking it up. At the same time, large and continuously changing object sets are characteristic of such applications, requiring efficient model update capabilities from the object detector. Today’s monolithic multi-class detectors do not fulfill this criterion of fast and flexible model updates. This paper introduces InstanceNet, an ensemble of efficient single-class instance detectors capable of fast and incremental adaptation to new object sets. Thanks to a dynamic sampling-based training strategy, accurate detection models for new objects can be obtained in less than 40 minutes on a consumer GPU, while only a small percentage of the existing detection models needs to be updated, and those updates are very efficient. The new detector has been thoroughly evaluated on a novel dataset of 100 grocery store objects.
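The ensemble structure is what makes the incremental update cheap: each object gets its own single-class detector, so adding or removing an object touches one model rather than a monolithic multi-class network. A minimal sketch, assuming a hypothetical train_single_class_detector callable; this is not the InstanceNet code.

```python
# A minimal sketch of a per-object detector ensemble.
# train_single_class_detector is a hypothetical callable mapping
# (positives, negatives) to a model that maps an image to
# (box, score) detections.

class DetectorEnsemble:
    def __init__(self, train_single_class_detector):
        self.train = train_single_class_detector
        self.models = {}  # object name -> single-class detector

    def add_object(self, name, positives, negatives):
        """Incremental update: only the new object's model is trained."""
        self.models[name] = self.train(positives, negatives)

    def remove_object(self, name):
        """Removing an object never touches the remaining models."""
        self.models.pop(name, None)

    def detect(self, image):
        """Run every single-class detector and pool the detections."""
        detections = []
        for name, model in self.models.items():
            for box, score in model(image):
                detections.append((name, box, score))
        return detections
```

The abstract's note that only a small percentage of the existing models needs to be updated suggests the real system also refreshes a few existing detectors (for example with new negatives), which this sketch omits.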

ICRA 2020 Conference Paper

DirtNet: Visual Dirt Detection for Autonomous Cleaning Robots

  • Richard Bormann
  • Xinjie Wang
  • Jiawen Xu 0001
  • Joel Schmidt

Visual dirt detection is becoming an important capability of modern professional cleaning robots both for optimizing their wet cleaning results and for facilitating demand-oriented daily vacuum cleaning. This paper presents a robust, fast, and reliable dirt and office item detection system for these tasks based on an adapted YOLOv3 framework. Its superiority over state-of-the-art dirt detection systems is demonstrated in several experiments. The paper furthermore features a dataset generator for creating any number of realistic training images from a small set of real scene, dirt, and object examples.
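The compositing idea behind such a dataset generator (paste real dirt and object cutouts onto clean scene images at random positions and derive bounding boxes from the paste locations) can be sketched in a few lines of Python with Pillow. The function below is a hypothetical illustration, not the paper's generator; it assumes RGBA cutouts smaller than the scene image.

```python
# A minimal sketch of cut-and-paste training-data generation: composite
# dirt/object cutouts onto a clean scene and record their boxes.
# Hypothetical illustration only, not the paper's generator.
import random
from PIL import Image

def generate_samples(scene_path, cutout_paths, n_samples=4):
    """Yield (image, boxes) pairs; boxes are (class_idx, x0, y0, x1, y1)."""
    scene = Image.open(scene_path).convert("RGB")
    cutouts = [Image.open(p).convert("RGBA") for p in cutout_paths]
    for _ in range(n_samples):
        img, boxes = scene.copy(), []
        for cls, cutout in enumerate(cutouts):
            # random placement; the alpha channel acts as the paste mask
            x = random.randint(0, scene.width - cutout.width)
            y = random.randint(0, scene.height - cutout.height)
            img.paste(cutout, (x, y), cutout)
            boxes.append((cls, x, y, x + cutout.width, y + cutout.height))
        yield img, boxes
```

Because box labels come for free from the paste coordinates, a small set of real scene, dirt, and object examples can be expanded into arbitrarily many annotated training images, which is the point the abstract makes.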