Arrow Research search

Author name cluster

Weihua Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers (4)

AAAI 2026 Conference Paper

Towards 3D Object-Centric Feature Learning for Semantic Scene Completion

  • Weihua Wang
  • Yubo Cui
  • Xiangru Lin
  • Zhiheng Li
  • Zheng Fang

Vision-based 3D Semantic Scene Completion (SSC) has received growing attention due to its potential in autonomous driving. While most existing approaches follow an ego-centric paradigm by aggregating and diffusing features over the entire scene, they often overlook fine-grained object-level details, leading to semantic and geometric ambiguities, especially in complex environments. To address this limitation, we propose Ocean, an object-centric prediction framework that decomposes the scene into individual object instances to enable more accurate semantic occupancy prediction. Specifically, we first employ a lightweight segmentation model, MobileSAM, to extract instance masks from the input image. Then, we introduce a 3D Semantic Group Attention module that leverages linear attention to aggregate object-centric features in 3D space. To handle segmentation errors and missing instances, we further design a Global Similarity-Guided Attention module that leverages segmentation features for global interaction. Finally, we propose an Instance-aware Local Diffusion module that improves instance features through a generative process and subsequently refines the scene representation in the BEV space. Extensive experiments on the SemanticKITTI and SSCBench-KITTI360 benchmarks demonstrate that Ocean achieves state-of-the-art performance, with mIoU scores of 17.40 and 20.28, respectively.
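
The abstract names the mechanism but not the math, so here is a minimal, hypothetical sketch of linear attention applied separately within each instance mask, in the spirit of the 3D Semantic Group Attention module. The elu(x)+1 kernel feature map, the toy shapes, and the grouping scheme are assumptions for illustration, not Ocean's actual implementation.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Linear (kernelized) attention: O(n*d^2) instead of O(n^2*d).

    Uses the common elu(x)+1 feature map so attention weights stay
    positive; q, k, v are (n, d) arrays of voxel features.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    q, k = phi(q), phi(k)
    kv = k.T @ v                      # (d, d) summary of keys/values
    z = q @ k.sum(axis=0) + eps       # (n,) per-query normalizer
    return (q @ kv) / z[:, None]      # (n, d) attended features

def group_attention(feats, instance_ids):
    """Attend only within each instance mask (object-centric grouping)."""
    out = np.empty_like(feats)
    for inst in np.unique(instance_ids):
        idx = instance_ids == inst
        x = feats[idx]
        out[idx] = linear_attention(x, x, x)  # self-attention per object
    return out

feats = np.random.randn(500, 32).astype(np.float32)   # toy voxel features
ids = np.random.randint(0, 8, size=500)               # toy instance labels
print(group_attention(feats, ids).shape)              # (500, 32)
```

Restricting attention to instance groups keeps interactions object-centric, while the kernelized form keeps the cost linear in the number of occupied voxels rather than quadratic.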

ECAI 2024 Conference Paper

Fully Hyperbolic Rotation for Knowledge Graph Embedding

  • Qiuyu Liang
  • Weihua Wang
  • Feilong Bao
  • Guanglai Gao

Hyperbolic rotation is commonly used to model knowledge graphs and their inherent hierarchies effectively. However, existing hyperbolic rotation models rely on logarithmic and exponential mappings for feature transformation. These models only project data features into hyperbolic space for rotation, limiting their ability to fully exploit the hyperbolic space. To address this problem, we propose a novel fully hyperbolic model designed for knowledge graph embedding. Instead of feature mappings, we define the model directly in hyperbolic space with the Lorentz model. Our model treats each relation in the knowledge graph as a Lorentz rotation from the head entity to the tail entity. We adopt the Lorentzian distance as the scoring function for measuring the plausibility of triplets. Extensive results on standard knowledge graph completion benchmarks demonstrate that our model achieves competitive results with fewer parameters. In addition, our model achieves state-of-the-art performance on the CoDEx-s and CoDEx-m datasets, which are more diverse and challenging than earlier benchmarks. Our code is available at https://github.com/llqy123/FHRE.
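
For readers unfamiliar with the Lorentz model, the sketch below shows the standard machinery the abstract builds on: lifting vectors onto the hyperboloid, the Lorentzian inner product, and a squared-Lorentzian-distance triplet score. The rotation here is a plain orthogonal matrix on the space-like coordinates, an illustrative stand-in for the paper's learned Lorentz rotation; shapes and values are toy assumptions, not the FHRE formulation.

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + sum_i xi*yi."""
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def to_hyperboloid(v):
    """Lift a Euclidean vector v onto the hyperboloid <x, x>_L = -1
    by solving for the time-like coordinate x0."""
    x0 = np.sqrt(1.0 + np.sum(v * v, axis=-1, keepdims=True))
    return np.concatenate([x0, v], axis=-1)

def squared_lorentz_distance(x, y):
    """For x, y on the hyperboloid: ||x - y||_L^2 = -2 - 2<x, y>_L."""
    return -2.0 - 2.0 * lorentz_inner(x, y)

def score(head, rotation, tail):
    """Plausibility of (h, r, t): negative squared Lorentzian distance
    between the rotated head and the tail (higher = more plausible).
    Rotating only the space-like coordinates preserves the hyperboloid
    constraint, so no log/exp mappings are needed."""
    h_rot = head.copy()
    h_rot[..., 1:] = head[..., 1:] @ rotation.T
    return -squared_lorentz_distance(h_rot, tail)

h = to_hyperboloid(np.array([0.3, -0.1, 0.2]))    # toy head entity
t = to_hyperboloid(np.array([0.25, -0.05, 0.18])) # toy tail entity
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # toy relation rotation
print(score(h, R, t))
```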

ICRA 2023 Conference Paper

Natural Language Instruction Understanding for Robotic Manipulation: A Multisensory Perception Approach

  • Weihua Wang
  • Xiaofei Li
  • Yanzhi Dong
  • Jun Xie
  • Di Guo 0002
  • Huaping Liu 0001

Robots have long been expected to understand natural language instructions, which would enable more natural human-robot interaction. Currently, a robot usually interprets an instruction by visually grounding the textual information to its surroundings, but visual perception alone may not be enough in complex situations. It is therefore reasonable for the robot to leverage its multisensory perception abilities to better understand the instruction. In this paper, we propose a multisensory perception approach to the task of natural language instruction understanding for robotic manipulation, in which the robot coordinates its visual, tactile, and auditory perception to fully understand the instruction and then executes the manipulation task. Extensive experiments demonstrate the superiority of multisensory perception over single-sensory perception for instruction understanding. Moreover, we establish a user-friendly human-robot interaction interface through which the human sends instructions to the robot via a mobile app.
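
As a toy illustration of the coordination idea (not the paper's model), the following sketch fuses per-modality match scores over candidate objects with a simple weighted late fusion; the modality weights and scores are invented for illustration.

```python
import numpy as np

# Hypothetical per-modality scores: how well each candidate object
# matches the instruction under vision, touch, and audition.
# Array index = candidate object.
scores = {
    "vision":  np.array([0.9, 0.8, 0.1]),  # e.g. "the red bottle"
    "tactile": np.array([0.2, 0.9, 0.3]),  # e.g. "the soft one"
    "audio":   np.array([0.5, 0.9, 0.4]),  # e.g. "the one that rattles"
}
weights = {"vision": 0.5, "tactile": 0.3, "audio": 0.2}  # assumed fusion weights

def fuse(scores, weights):
    """Weighted late fusion of per-modality match scores."""
    total = sum(w * scores[m] for m, w in weights.items())
    return int(np.argmax(total)), total

best, fused = fuse(scores, weights)
print(f"target object: {best}, fused scores: {fused}")
# Vision alone would pick object 0; adding touch and sound
# disambiguates to object 1.
```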

IROS 2001 Conference Paper

Multi-batch micro-self-assembly via controlled capillary forces

  • Xiaorong Xiong
  • Yael Hanein
  • Weihua Wang
  • Daniel T. Schwartz
  • Karl-Friedrich Böhringer

Advances in silicon processing and microelectromechanical systems (MEMS) have made possible the production of very large numbers of very small components at very low cost in massively parallel batches. Assembly, in contrast, remains a mostly serial (i.e., non-batch) technique. We argue that massively parallel self-assembly of microparts will be a crucial enabling technology for future complex microsystems. As a specific approach, we present a technique for assembly of multiple batches of microparts based on capillary forces and controlled modulation of surface hydrophobicity. We derive a simplified model that gives rise to geometric algorithms for predicting assembly forces and for guiding the design optimization of self-assembling microparts. Promising initial results from theory and experiments, as well as challenging open problems, are presented to lay a foundation for general models and algorithms for self-assembly.
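
The "simplified model" mentioned in the abstract reduces assembly prediction to interfacial energy as a function of part misalignment. Below is a hypothetical one-dimensional sketch of that idea: the exposed hydrophobic area grows with the offset between a square part and its binding site, interfacial energy is proportional to that area, and the lateral capillary force is the negative energy gradient. The geometry and the surface-energy constant are illustrative assumptions, not values from the paper.

```python
GAMMA = 0.073   # interfacial energy, J/m^2 (assumed, ~water/air at 20 C)
SIDE = 1e-3     # side length of the square binding site, m (assumed)

def exposed_area(dx):
    """Hydrophobic area left uncovered when a square part is offset
    by dx along one axis from a matching square site: one uncovered
    strip on the site plus one overhanging strip on the part."""
    return 2.0 * SIDE * min(abs(dx), SIDE)

def energy(dx):
    """Interfacial energy ~ gamma * exposed area; minimized at dx = 0."""
    return GAMMA * exposed_area(dx)

def restoring_force(dx, h=1e-9):
    """Lateral capillary force as the negative numerical energy gradient."""
    return -(energy(dx + h) - energy(dx - h)) / (2.0 * h)

for dx in (2e-4, 1e-4, 1e-5):
    print(f"offset {dx:.0e} m -> force {restoring_force(dx):+.2e} N")
# The force has constant magnitude and always points back toward
# alignment, since exposed area grows linearly with offset here.
```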