Arrow Research search

Author name cluster

Jiabin Liu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers

6

AAAI Conference 2026 Conference Paper

Manipulating the Mind’s Eye: A-SAGE, the Attention-Based Attack on ViT Explainability

  • Boshi Zheng
  • Yan Li
  • Jiabin Liu

The rise of Vision Transformers (ViTs) as cornerstone models in safety-critical applications such as autonomous driving and medical diagnosis has shifted the focus from pure accuracy to verifiable trustworthiness. However, the very mechanisms used to explain these models (their internal attention maps) are themselves vulnerable. This creates a critical "trust gap": the model's apparent reasoning can be maliciously manipulated. To systematically investigate this vulnerability, we introduce A-SAGE (Attention-based Steering Adversarial Generation by Corrupting Explanations), a dual-objective attack framework that forces a model to misclassify an input while simultaneously corrupting its internal attention patterns to generate a misleading explanation. A-SAGE achieves this by optimizing a unified loss that combines a standard classification objective with two explanation-specific terms: an attention entropy loss that diffuses the model's focus and an attention map distortion loss that steers the corrupted explanation toward a desired target. Our primary finding is A-SAGE's exceptional black-box transferability. Using CaiT-S as a white-box surrogate, adversarial examples generated with imperceptible perturbations achieve attack success rates of 79.4% on ViT-B, 49.7% on ResNet-50, and over 81.5% on other transformers (DeiT-B, TNT-S). Crucially, these successful attacks do not merely destroy the explanation; they generate a coherent but false attention map that deceptively "justifies" the wrong prediction. These results reveal a systemic vulnerability in the core reasoning of modern foundation models, establishing A-SAGE as a critical benchmark for auditing the robustness of AI explainability.
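The abstract describes a unified loss with three terms: a classification objective, an attention entropy term, and an attention map distortion term. A minimal sketch of how such terms could be combined, with plain lists standing in for tensors; the weights, signs, and exact terms are illustrative assumptions, not the paper's definition:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dual_objective_loss(logits, true_class, attn, target_attn,
                        lam_entropy=0.1, lam_distort=1.0):
    """Sketch of a dual-objective attack loss in the spirit of A-SAGE:
    push the model away from the correct label while diffusing and
    steering its attention.  Coefficients are hypothetical."""
    probs = softmax(logits)
    # (1) classification term: +log p(true class), so minimizing the
    # total loss drives the probability of the true class down
    cls_loss = math.log(probs[true_class] + 1e-12)
    # (2) attention entropy term: negated entropy, so minimizing the
    # total loss *increases* entropy and diffuses the model's focus
    ent = -sum(a * math.log(a + 1e-12) for a in attn)
    entropy_loss = -ent
    # (3) distortion term: squared distance steering the attention map
    # toward an attacker-chosen target explanation
    distort = sum((a - t) ** 2 for a, t in zip(attn, target_attn))
    return cls_loss + lam_entropy * entropy_loss + lam_distort * distort
```

In a real attack this scalar would be minimized over the input perturbation by gradient descent under an imperceptibility constraint.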

IROS Conference 2025 Conference Paper

A Mole-inspired Incisor-Burrowing Robotic Platform for Planetary Exploration

  • Ran Xu
  • Jiabin Liu
  • Zhaofeng Liang
  • Hongmin Zheng
  • Kunquan Zheng
  • Zibiao Chen
  • Jiawei Chen
  • Tao Zhang 0064

Planetary exploration requires efficient methods for subsurface sampling, especially under extreme energy limitations. Traditional drilling methods are often energy-intensive and require large platforms, limiting their applicability. Bio-inspired burrowing techniques, modeled on animals such as moles, offer lightweight, low-power alternatives suitable for small robotic platforms. This paper presents a novel bio-inspired robotic platform, the Mole-like Incisor-Burrowing Robotic Platform (MIRP), designed to mimic the incisor-burrowing behavior of naked mole rats. The MIRP features an 11-DOF mechanism with a compact design (220 mm × 140 mm × 80 mm) and uses servomotors to achieve low energy consumption. The robot combines a quadrupedal locomotion mechanism with an incisor-burrowing mechanism, allowing it to navigate granular terrains and perform excavation tasks. Kinematic analysis, including inverse kinematics and closed-chain analysis, was conducted to optimize the robot's motion strategy. A prototype was developed and evaluated in a simulated lunar regolith environment to assess its maneuverability and burrowing performance; its power consumption stays below 10 W. This work validates the feasibility of bio-inspired incisor burrowing for planetary exploration, offering a cost-effective and efficient solution for future extraterrestrial missions.
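The abstract mentions inverse-kinematics analysis for motion planning. As a generic illustration of the kind of computation involved, here is the textbook closed-form inverse kinematics of a planar two-link limb; this is not MIRP's actual 11-DOF model, and all names are hypothetical:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link limb:
    given a tip target (x, y) and link lengths l1, l2, return joint
    angles (theta1, theta2) in radians (elbow-up branch)."""
    d2 = x * x + y * y
    # law of cosines gives the elbow angle
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    # shoulder angle: direction to the target minus the elbow's offset
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A forward-kinematics check (summing the two link vectors) confirms the solution reaches the commanded target.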

AAAI Conference 2025 Conference Paper

MLAAN: Scaling Supervised Local Learning with Multilaminar Leap Augmented Auxiliary Network

  • Yuming Zhang
  • Shouxin Zhang
  • Peizhe Wang
  • Feiyu Zhu
  • Dongzhi Guan
  • Junhao Su
  • Jiabin Liu
  • Changpeng Cai

Deep neural networks (DNNs) typically employ an end-to-end (E2E) training paradigm, which presents several challenges, including high GPU memory consumption, inefficiency, and difficulties in model parallelization during training. Recent research has sought to address these issues, with one promising approach being local learning. This method partitions the backbone network into gradient-isolated modules and manually designs auxiliary networks to train these local modules. Existing methods often neglect the exchange of information between local modules, leading to myopic behavior and a performance gap compared to E2E training. To address these limitations, we propose the Multilaminar Leap Augmented Auxiliary Network (MLAAN). Specifically, MLAAN comprises Multilaminar Local Modules (MLM) and Leap Augmented Modules (LAM). MLM captures both local and global features through independent and cascaded auxiliary networks, alleviating the performance issues caused by insufficient global features. However, overly simplistic auxiliary networks can impede MLM's ability to capture global information. To address this, we further design LAM, an enhanced auxiliary network that uses the exponential moving average (EMA) method to facilitate information exchange between local modules, thereby mitigating the shortsightedness that results from inadequate interaction. The synergy between MLM and LAM yields excellent performance. Our experiments on the CIFAR-10, STL-10, SVHN, and ImageNet datasets show that MLAAN can be seamlessly integrated into existing local learning frameworks, significantly enhancing their performance and even surpassing E2E training, while also reducing GPU memory consumption.
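The EMA mechanism the abstract attributes to LAM is the standard exponential-moving-average update; how LAM wires it between modules is specific to the paper, but the update itself looks like this (parameter names and momentum value are illustrative):

```python
def ema_update(ema_params, live_params, momentum=0.99):
    """One step of an exponential moving average over parameters:
    each EMA value drifts toward the live value at rate (1 - momentum).
    This is the generic EMA update, not MLAAN's exact wiring."""
    return [momentum * e + (1.0 - momentum) * p
            for e, p in zip(ema_params, live_params)]

# repeated updates converge toward the live parameters
ema = [0.0]
for _ in range(5):
    ema = ema_update(ema, [1.0], momentum=0.9)
```

Because the EMA changes slowly, a downstream module reading it sees a smoothed, lag-averaged view of an upstream module's parameters rather than their noisy per-step values.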

ICML Conference 2021 Conference Paper

Leveraged Weighted Loss for Partial Label Learning

  • Hongwei Wen
  • Jingyi Cui
  • Hanyuan Hang
  • Jiabin Liu
  • Yisen Wang 0001
  • Zhouchen Lin

As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, of which only one is true. Despite many methodological studies on learning from partial labels, theoretical understanding of their risk-consistency properties under relatively weak assumptions is still lacking, especially regarding the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named Leveraged Weighted (LW) loss, which for the first time introduces the leverage parameter β to balance the trade-off between losses on partial labels and non-partial ones. On the theoretical side, we derive a generalized risk-consistency result for the LW loss in learning from partial labels, based on which we provide guidance for the choice of the leverage parameter β. In experiments, we verify this theoretical guidance and show the high effectiveness of our proposed LW loss on both benchmark and real datasets compared with other state-of-the-art partial label learning algorithms.
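To make the trade-off concrete: one plausible instantiation of a leverage-weighted scheme sums a base loss over the candidate (partial) labels and adds β times a loss penalizing confidence on non-candidates. This is a hedged sketch of the role of β, not the paper's exact definition:

```python
import math

def lw_loss_sketch(scores, candidate_set, beta=1.0):
    """Illustrative leveraged-weighted-style loss for partial labels.
    scores: per-class real-valued scores for one instance.
    candidate_set: indices of the candidate (partial) labels.
    beta: leverage parameter trading off partial vs non-partial terms.
    Uses a logistic base loss phi(z) = log(1 + exp(-z))."""
    def phi(z):
        return math.log1p(math.exp(-z))
    partial = sum(phi(scores[y]) for y in candidate_set)
    non_partial = sum(phi(-scores[y]) for y in range(len(scores))
                      if y not in candidate_set)
    return partial + beta * non_partial
```

With beta = 0 the loss ignores non-candidate classes entirely; increasing beta penalizes any score mass placed outside the candidate set, which is the trade-off the abstract says β controls.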

IJCAI Conference 2021 Conference Paper

Two-stage Training for Learning from Label Proportions

  • Jiabin Liu
  • Bo Wang
  • Xin Shen
  • Zhiquan Qi
  • YingJie Tian

Learning from label proportions (LLP) aims to learn an instance-level classifier from label proportions in grouped training data. Existing deep-learning-based LLP methods use end-to-end pipelines that obtain a proportional loss from the Kullback-Leibler divergence between the bag-level prior and posterior class distributions. However, unconstrained optimization of this objective can hardly reach a solution consistent with the given proportions. Moreover, for a probabilistic classifier, this strategy unavoidably yields high-entropy conditional class distributions at the instance level. These issues further degrade the performance of instance-level classification. In this paper, we treat these problems as noisy pseudo-labeling and instead impose strict proportion consistency on the classifier through a constrained optimization that serves as a continuous training stage for existing LLP classifiers. In addition, we introduce the mixup strategy and a symmetric cross-entropy loss to further reduce the label noise. Our framework is model-agnostic and demonstrates compelling performance improvements in extensive experiments when incorporated into other deep LLP models as a post-hoc phase.
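The bag-level proportional loss the abstract refers to is the KL divergence between the known label proportions of a bag and the average of the instance-level predicted class distributions. A minimal sketch with plain lists standing in for tensors (names are illustrative):

```python
import math

def bag_kl_loss(prior, posteriors):
    """KL(prior || mean posterior) for one bag in an LLP pipeline.
    prior: known class proportions of the bag, length-k, sums to 1.
    posteriors: list of per-instance predicted class distributions."""
    n = len(posteriors)
    k = len(prior)
    # bag-level posterior: mean of the instance-level distributions
    mean_post = [sum(p[c] for p in posteriors) / n for c in range(k)]
    # KL divergence; the small epsilon guards against log of zero
    return sum(prior[c] * math.log(prior[c] / (mean_post[c] + 1e-12))
               for c in range(k) if prior[c] > 0)
```

The loss is zero exactly when the averaged predictions match the given proportions; as the abstract notes, many instance-level labelings achieve this match, which is why minimizing it alone under-constrains the classifier.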

NeurIPS Conference 2019 Conference Paper

Learning from Label Proportions with Generative Adversarial Networks

  • Jiabin Liu
  • Bo Wang
  • Zhiquan Qi
  • YingJie Tian
  • Yong Shi

In this paper, we leverage generative adversarial networks (GANs) to derive an effective algorithm, LLP-GAN, for learning from label proportions (LLP), where only bag-level proportional label information is available. Endowed with an end-to-end structure, LLP-GAN performs approximation through an adversarial learning mechanism, without imposing restrictive assumptions on the distribution. Accordingly, we can directly induce the final instance-level classifier from the discriminator. Under mild assumptions, we give the explicit generative representation and prove the global optimality of LLP-GAN. Additionally, compared with existing methods, our work endows the LLP solver with the scalability inherited from deep models. Several experiments on benchmark datasets demonstrate the clear advantages of the proposed approach.