Arrow Research search

Author name cluster

Shiyuan Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers

4

AAAI Conference 2026 Conference Paper

Unleashing Semantic and Geometric Priors for 3D Scene Completion

  • Shiyuan Chen
  • Wei Sui
  • Bohao Zhang
  • Zeyd Boukhers
  • John See
  • Cong Yang

Camera-based 3D semantic scene completion (SSC) provides dense geometric and semantic perception for autonomous driving and robotic navigation. However, existing methods rely on a coupled encoder to deliver both semantic and geometric priors, which forces the model to make a trade-off between conflicting demands and limits its overall performance. To tackle these challenges, we propose FoundationSSC, a novel framework that performs dual decoupling at both the source and pathway levels. At the source level, we introduce a foundation encoder that provides rich semantic feature priors for the semantic branch and high-fidelity stereo cost volumes for the geometric branch. At the pathway level, these priors are refined through specialised, decoupled pathways, yielding superior semantic context and depth distributions. Our dual-decoupling design produces disentangled and refined inputs, which are then utilised by a hybrid view transformation to generate complementary 3D features. Additionally, we introduce a novel Axis-Aware Fusion (AAF) module that addresses the often-overlooked challenge of fusing these features by anisotropically merging them into a unified representation. Extensive experiments demonstrate the advantages of FoundationSSC, achieving simultaneous improvements in both semantic and geometric metrics, surpassing prior bests by +0.23 mIoU and +2.03 IoU on SemanticKITTI. We also achieve state-of-the-art performance on SSCBench-KITTI-360, with 21.78 mIoU and 48.61 IoU.
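The core idea behind anisotropic fusion — weighting two feature volumes differently along each spatial axis before merging — can be illustrated with a minimal sketch. The per-axis mixing weights and mean-pooled context below are a hypothetical simplification for illustration, not the paper's AAF implementation.

```python
import numpy as np

def axis_aware_fuse(sem, geo, axis_weights):
    """Merge two 3D feature volumes with per-axis (anisotropic) weights.

    sem, geo:     arrays of shape (X, Y, Z, C) -- semantic and geometric
                  feature volumes.
    axis_weights: length-3 sequence in [0, 1]; weight w_i mixes the two
                  volumes' context along spatial axis i (hypothetical
                  simplification of axis-aware fusion).
    """
    assert sem.shape == geo.shape
    fused = np.zeros_like(sem)
    for axis, w in enumerate(axis_weights):
        # Pool each volume along one spatial axis, broadcast the pooled
        # context back over the volume; higher w favours the semantic branch.
        sem_ctx = sem.mean(axis=axis, keepdims=True)
        geo_ctx = geo.mean(axis=axis, keepdims=True)
        fused += w * sem_ctx + (1.0 - w) * geo_ctx
    return fused / len(axis_weights)
```

The point of treating each axis separately is that camera-derived features degrade differently along depth than along the image plane, so an isotropic (single-weight) merge discards that structure.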

ICRA Conference 2024 Conference Paper

A Vision-Centric Approach for Static Map Element Annotation

  • Jiaxin Zhang 0014
  • Shiyuan Chen
  • Haoran Yin
  • Ruohong Mei
  • Xuan Liu
  • Cong Yang
  • Qian Zhang 0009
  • Wei Sui

The recent development of online static map element (a.k.a. HD Map) construction algorithms has raised a vast demand for data with ground truth annotations. However, available public datasets currently cannot provide high-quality training data regarding consistency and accuracy. To this end, we present CAMA: a vision-centric approach for Consistent and Accurate Map Annotation. Without LiDAR inputs, our proposed framework can still generate high-quality 3D annotations of static map elements. Specifically, the annotation can achieve high reprojection accuracy across all surrounding cameras and is spatio-temporally consistent across the whole sequence. We apply our proposed framework to the popular nuScenes dataset to provide efficient and highly accurate annotations. Compared with the original nuScenes static map element, models trained with annotations from CAMA achieve lower reprojection errors (e.g., 4.73 vs. 8.03 pixels).
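Reprojection error, the metric quoted in pixels above, measures how far a projected 3D annotation lands from its observed 2D location in an image. A minimal pinhole-camera sketch (the intrinsics and points in the usage note are made-up values, not from the paper):

```python
import numpy as np

def reprojection_error(points_3d, points_2d, K):
    """Mean pixel distance between projected 3D points and 2D observations.

    points_3d: (N, 3) points already in the camera frame, with Z > 0.
    points_2d: (N, 2) observed (annotated) pixel coordinates.
    K:         3x3 pinhole intrinsic matrix.
    """
    proj = (K @ points_3d.T).T          # project to homogeneous pixel coords
    proj = proj[:, :2] / proj[:, 2:3]   # perspective divide by depth
    return float(np.linalg.norm(proj - points_2d, axis=1).mean())
```

For example, with K = [[100, 0, 50], [0, 100, 50], [0, 0, 1]], a point at (0, 0, 1) projects to pixel (50, 50); if the annotation sits at (53, 54), the error for that point is 5 pixels.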

IROS Conference 2017 Conference Paper

The datum particle filter: Localization for objects with coupled geometric datums

  • Shiyuan Chen
  • Brad Saund
  • Reid G. Simmons

In this paper, we propose a touch-based localization approach for a potentially large and complex object with multiple internal degrees of freedom. Should a task only require a partial localization of the object, our method selects the appropriate information gathering actions to register the desired features. We use probabilistic methods to reason over the distribution of the estimated object poses in the 6-DOF configuration space. We introduce the datum-based particle filter to handle intrinsic tolerances between each of the sections of the object. We describe two alternative methods for the particle filter system: one using the full joint belief and the other reasonably simplifying the belief to achieve a better ability to scale. We present simulation results for both proposed methods to show the advantages of our approaches.

ICRA Conference 2017 Conference Paper

Touch based localization of parts for high precision manufacturing

  • Brad Saund
  • Shiyuan Chen
  • Reid G. Simmons

Performing detailed work on objects requires precise localization. Currently humans aid machines in localization either by direct operation, or implicitly by designing a sequence of actions a robot follows. Our approach to automate localization is to reason over many potential actions, perform the best information gathering action, and then use the measurement obtained to update a non-Gaussian belief. We propose a method for autonomous localization of objects with initial 6DOF uncertainty capable of reasoning about and performing measurements with low uncertainty and arbitrary error models. Surprisingly, common methods capable of modeling arbitrary belief distributions perform poorly as measurement uncertainty decreases, so we modify a particle filter to handle these accurate measurements produced by tactile or laser sensors. We then show how the expected information gain of the proposed measurement can be calculated efficiently from these particles. We present experiments, both in simulation and on hardware, that show our method is both fast and accurate.
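The two steps the abstract describes — reweighting a particle belief by an accurate measurement, and scoring candidate measurements by expected information gain — can be sketched as below. The 1-D state and Gaussian sensor model are illustrative stand-ins for the 6DOF pose and tactile/laser error models; this is not the authors' modified filter, which specifically addresses the degeneracy that a small sensor sigma causes here.

```python
import numpy as np

def measurement_update(particles, weights, z, sensor_sigma, h=lambda x: x):
    """Reweight particles by the Gaussian likelihood of measurement z.

    particles:    (N,) state samples (1-D stand-in for a 6DOF pose).
    weights:      (N,) normalized particle weights.
    z:            observed measurement value.
    sensor_sigma: measurement noise std; as it shrinks, few particles keep
                  non-zero weight -- the degeneracy the paper works around.
    h:            measurement model mapping state -> expected measurement.
    """
    likelihood = np.exp(-0.5 * ((h(particles) - z) / sensor_sigma) ** 2)
    new_w = weights * likelihood
    total = new_w.sum()
    if total == 0.0:  # every particle inconsistent with z: keep prior belief
        return weights
    return new_w / total

def expected_entropy_drop(particles, weights, sensor_sigma, n_samples=100, rng=None):
    """Monte-Carlo estimate of a measurement's expected information gain:
    the average entropy reduction over simulated measurement outcomes."""
    rng = rng or np.random.default_rng(0)

    def entropy(w):
        w = w[w > 0]
        return -np.sum(w * np.log(w))

    h0 = entropy(weights)
    gains = []
    for _ in range(n_samples):
        # Simulate an outcome: draw a particle by weight, add sensor noise.
        x = rng.choice(particles, p=weights)
        z = x + rng.normal(0.0, sensor_sigma)
        posterior = measurement_update(particles, weights, z, sensor_sigma)
        gains.append(h0 - entropy(posterior))
    return float(np.mean(gains))
```

An action-selection loop would evaluate `expected_entropy_drop` for each candidate measurement and execute the one with the largest estimated gain.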