Arrow Research search

Author name cluster

Haixia Bi

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers · 1 author row

Possible papers (3)

AAAI 2026 · Conference Paper

HeadHunt-VAD: Hunting Robust Anomaly-Sensitive Heads in MLLM for Tuning-Free Video Anomaly Detection

  • Zhaolin Cai
  • Fan Li
  • Ziwei Zheng
  • Haixia Bi
  • Lijun He

Video Anomaly Detection (VAD) aims to locate events that deviate from normal patterns in videos. Traditional approaches often rely on extensive labeled data and incur high computational costs. Recent tuning-free methods based on Multimodal Large Language Models (MLLMs) offer a promising alternative by leveraging their rich world knowledge. However, these methods typically rely on textual outputs, which introduces information loss, exhibits normalcy bias, and suffers from prompt sensitivity, making them insufficient for capturing subtle anomalous cues. To address these constraints, we propose HeadHunt-VAD, a novel tuning-free VAD paradigm that bypasses textual generation by directly hunting robust anomaly-sensitive internal attention heads within the frozen MLLM. Central to our method is a Robust Head Identification module that systematically evaluates all attention heads using a multi-criteria analysis of saliency and stability, identifying a sparse subset of heads that are consistently discriminative across diverse prompts. Features from these expert heads are then fed into a lightweight anomaly scorer and a temporal locator, enabling efficient and accurate anomaly detection with interpretable outputs. Extensive experiments show that HeadHunt-VAD achieves state-of-the-art performance among tuning-free methods on two major VAD benchmarks while maintaining high efficiency, validating head-level probing in MLLMs as a powerful and practical solution for real-world anomaly detection.
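The head-selection idea in the abstract can be sketched as a simple scoring-and-ranking step. The code below is an illustrative stand-in, not the paper's implementation: it assumes each attention head already has a per-prompt anomaly-discrimination score, combines a saliency term (high average score) with a stability term (low variance across prompts), and keeps the top-k heads.

```python
import numpy as np

def select_robust_heads(head_scores, k=4):
    """Rank attention heads by saliency and stability.

    head_scores: array of shape (n_prompts, n_heads), each head's
    anomaly-discrimination score under each prompt (a hypothetical
    stand-in for the paper's multi-criteria analysis).
    """
    saliency = head_scores.mean(axis=0)                 # high average discrimination
    stability = 1.0 / (1.0 + head_scores.std(axis=0))   # consistent across prompts
    combined = saliency * stability
    return np.argsort(combined)[::-1][:k]               # indices of the top-k heads

rng = np.random.default_rng(0)
scores = rng.random((5, 32))       # 5 prompts x 32 heads of background noise
scores[:, [3, 17]] += 2.0          # two heads are consistently discriminative
top = select_robust_heads(scores, k=2)
print(sorted(top.tolist()))        # → [3, 17]
```

Features from the selected heads would then feed a lightweight scorer, as the abstract describes; the multiplicative saliency-stability combination here is just one plausible way to reward heads that are both strong and prompt-insensitive.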

AAAI 2026 · Conference Paper

Invisible Triggers, Visible Threats! Road-Style Adversarial Creation Attack for Visual 3D Detection in Autonomous Driving

  • Jian Wang
  • Lijun He
  • Yixing Yong
  • Haixia Bi
  • Fan Li

Modern autonomous driving (AD) systems leverage 3D object detection to perceive foreground objects in 3D environments for subsequent prediction and planning. Visual 3D detection based on RGB cameras provides a cost-effective solution compared to the LiDAR paradigm. While achieving promising detection accuracy, current deep neural network-based models remain highly susceptible to adversarial examples. The underlying safety concerns motivate us to investigate realistic adversarial attacks in AD scenarios. Previous work has demonstrated the feasibility of placing adversarial posters on the road surface to induce hallucinations in the detector. However, the unnatural appearance of the posters makes them easily noticeable by humans, and their fixed content can be readily targeted and defended. To address these limitations, we propose AdvRoad to generate diverse road-style adversarial posters. The adversaries have naturalistic appearances resembling the road surface while compromising the detector to perceive non-existent objects at the attack locations. We employ a two-stage approach, termed Road-Style Adversary Generation and Scenario-Associated Adaptation, to maximize the attack effectiveness on the input scene while ensuring the natural appearance of the poster, allowing the attack to be carried out stealthily without drawing human attention. Extensive experiments show that AdvRoad generalizes well to different detectors, scenes, and spoofing locations. Moreover, physical attacks further demonstrate the practical threats in real-world environments.
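The core tension the abstract describes, maximizing a phantom detection while staying visually close to the road surface, can be written as a style-constrained optimization. The sketch below is a toy, not AdvRoad itself: a linear "detector" score stands in for a real 3D detector, and an L2 penalty toward a road-texture patch stands in for the naturalness constraint.

```python
import numpy as np

rng = np.random.default_rng(1)
road_texture = rng.random(64)      # stand-in for a flattened road-surface patch
w = rng.standard_normal(64)        # toy detector: phantom-object score = w @ poster

poster = road_texture.copy()
lam = 0.5                          # weight of the style (naturalness) penalty
for _ in range(200):
    # gradient of:  -score + lam * ||poster - road_texture||^2
    grad = -w + 2 * lam * (poster - road_texture)
    poster -= 0.01 * grad          # ascend the score, stay near the texture
    poster = np.clip(poster, 0.0, 1.0)  # keep a valid pixel range

print(w @ poster > w @ road_texture)    # the optimized poster raises the phantom score
```

Raising `lam` trades attack strength for a poster that is harder for humans to spot, which mirrors the stealthiness-vs-effectiveness trade-off the two-stage pipeline is designed around.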

JBHI 2021 · Journal Article

Human Activity Recognition Based on Dynamic Active Learning

  • Haixia Bi
  • Miquel Perello-Nieto
  • Raul Santos-Rodriguez
  • Peter Flach

Activity of daily living is an important indicator of the health status and functional capabilities of an individual. Activity recognition, which aims at understanding the behavioral patterns of people, has increasingly received attention in recent years. However, there are still a number of challenges confronting the task. First, labelling training data is expensive and time-consuming, leading to limited availability of annotations. Secondly, activities performed by individuals have considerable variability, which renders the generally used supervised learning with a fixed label set unsuitable. To address these issues, we propose a dynamic active learning-based activity recognition method in this work. Different from traditional active learning methods which select samples based on a fixed label set, the proposed method not only selects informative samples from known classes, but also dynamically identifies new activities which are not included in the predefined label set. Starting with a classifier that has access to a limited number of labelled samples, we iteratively extend the training set with informative labels by fully considering the uncertainty, diversity and representativeness of samples, based on which better-informed classifiers can be trained, further reducing the annotation cost. We evaluate the proposed method on two synthetic datasets and two existing benchmark datasets. Experimental results demonstrate that our method not only boosts the activity recognition performance with considerably reduced annotation cost, but also enables adaptive daily activity analysis allowing the presence and detection of novel activities and patterns.
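The selection loop the abstract describes, querying informative samples from known classes while flagging possible new activities, might be sketched as below. The scoring functions and thresholds here are illustrative simplifications, not the paper's actual criteria: entropy stands in for uncertainty, a low maximum class probability flags potential novel activities, and a minimum feature distance gives a crude diversity check.

```python
import numpy as np

def dynamic_select(probs, feats, n_query=2, novelty_thresh=0.4):
    """Toy dynamic active-learning selection.

    probs: (n, n_classes) predicted probabilities over the known label set
    feats: (n, d) sample features, used for a crude diversity check
    Returns (query_idx, novel_idx): informative known-class samples to
    label, and samples flagged as possible new activities.
    """
    max_prob = probs.max(axis=1)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # uncertainty
    novel_idx = np.where(max_prob < novelty_thresh)[0]      # weak fit to every known class
    known = np.setdiff1d(np.arange(len(probs)), novel_idx)
    # Greedy pick: most uncertain first, skipping near-duplicates (diversity).
    order = known[np.argsort(-entropy[known])]
    picked = []
    for i in order:
        if all(np.linalg.norm(feats[i] - feats[j]) > 0.1 for j in picked):
            picked.append(int(i))
        if len(picked) == n_query:
            break
    return np.array(picked), novel_idx

probs = np.array([[0.90, 0.05, 0.05],   # confident: not worth querying
                  [0.50, 0.30, 0.20],   # uncertain among known classes
                  [0.34, 0.33, 0.33]])  # weak everywhere: possible new activity
feats = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
query, novel = dynamic_select(probs, feats, n_query=1)
print(query.tolist(), novel.tolist())   # → [1] [2]
```

In the iterative setting the abstract describes, the queried samples would be labelled and added to the training set, and flagged novel samples would extend the label set before the next round.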