
Author name cluster

Wei Wan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers

Possible papers (7)

AAAI 2025 · Conference Paper

Breaking Barriers in Physical-World Adversarial Examples: Improving Robustness and Transferability via Robust Feature

  • Yichen Wang
  • Yuxuan Chou
  • Ziqi Zhou
  • Hangtao Zhang
  • Wei Wan
  • Shengshan Hu
  • Minghui Li

As deep neural networks (DNNs) are widely deployed in the physical world, much research has focused on physical-world adversarial examples (PAEs), which introduce perturbations into inputs to cause incorrect model outputs. However, existing PAEs face two challenges: unsatisfactory attack performance (i.e., poor transferability and insufficient robustness to environmental conditions), and difficulty in balancing attack effectiveness with stealthiness, since stronger attacks tend to make PAEs more perceptible. In this paper, we explore a novel perturbation-based method to overcome these challenges. For the first challenge, we introduce a strategy, Deceptive RF Injection, based on robust features (RFs) that are predictive, robust to perturbations, and consistent across different models. Specifically, it improves the transferability and robustness of PAEs by overlaying RFs of other classes onto the predictive features of clean images. For the second challenge, we introduce a second strategy, Adversarial Semantic Pattern Minimization, which removes most perturbations and retains only the essential adversarial patterns. Combining the two strategies, we design the Robust Feature Coverage Attack (RFCoA), which comprises Robust Feature Disentanglement and Adversarial Feature Fusion. In the first stage, we extract target-class RFs in feature space. In the second stage, we use attention-based feature fusion to overlay these RFs onto the predictive features of clean images and remove unnecessary perturbations. Experiments show our method's superior transferability, robustness, and stealthiness compared to existing state-of-the-art methods. Additionally, its effectiveness extends to Large Vision-Language Models (LVLMs), indicating its potential applicability to more complex tasks.
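To make the attention-based feature fusion stage concrete, here is a minimal PyTorch sketch of overlaying target-class robust features onto clean-image features. The single-head attention formulation, tensor shapes, and function name are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch: attention-weighted overlay of target-class robust features
# (RFs) onto clean-image features. Shapes and the single-head attention
# formulation are assumptions for illustration.
import torch
import torch.nn.functional as F

def fuse_robust_features(clean_feat: torch.Tensor,
                         target_rf: torch.Tensor,
                         temperature: float = 1.0) -> torch.Tensor:
    """clean_feat, target_rf: (B, N, D) token-style feature maps."""
    # Attention scores: how strongly each clean token attends to each RF token.
    scores = torch.matmul(clean_feat, target_rf.transpose(1, 2))
    scores = scores / (clean_feat.size(-1) ** 0.5 * temperature)
    weights = F.softmax(scores, dim=-1)              # (B, N, N)
    attended_rf = torch.matmul(weights, target_rf)   # (B, N, D)
    # Overlay the attended RF content onto the clean features.
    return clean_feat + attended_rf

# Toy usage with random tensors standing in for backbone features.
clean = torch.randn(2, 49, 256)
rf = torch.randn(2, 49, 256)
print(fuse_robust_features(clean, rf).shape)  # torch.Size([2, 49, 256])
```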

NeurIPS 2025 · Conference Paper

MARS: A Malignity-Aware Backdoor Defense in Federated Learning

  • Wei Wan
  • Yuxuan Ning
  • Zhicong Huang
  • Cheng Hong
  • Shengshan Hu
  • Ziqi Zhou
  • Yechao Zhang
  • Tianqing Zhu

Federated Learning (FL) is a distributed paradigm that trains a high-quality model by exchanging model parameters rather than raw data, thereby protecting participants' data privacy. However, this distributed nature also makes FL highly vulnerable to backdoor attacks. Notably, the recently proposed state-of-the-art (SOTA) attack, 3DFed (S&P 2023), uses an indicator mechanism to determine whether its backdoor models have been accepted by the defender and adaptively optimizes them accordingly, rendering existing defenses ineffective. In this paper, we first reveal that the failure of existing defenses lies in their reliance on empirical statistical measures that are only loosely coupled with backdoor attacks. Motivated by this, we propose a Malignity-Aware backdooR defenSe (MARS) that leverages backdoor energy (BE) to indicate the malicious extent of each neuron. To amplify malignity, we further extract the most prominent BE values from each model to form a concentrated backdoor energy (CBE). Finally, a novel Wasserstein distance-based clustering method is introduced to effectively identify backdoor models. Extensive experiments demonstrate that MARS defends against SOTA backdoor attacks and significantly outperforms existing defenses.
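As a rough illustration of the defense pipeline described above, the sketch below scores each update with a stand-in backdoor-energy proxy, concentrates the top-k values into a CBE vector, and flags outliers by Wasserstein distance. The absolute-value energy proxy and the median/MAD cutoff are assumptions; the paper's exact definitions and clustering method differ.

```python
# Hedged sketch of a MARS-style pipeline: per-neuron "backdoor energy" proxy,
# top-k concentration (CBE), and Wasserstein-distance outlier flagging.
import numpy as np
from scipy.stats import wasserstein_distance

def concentrated_backdoor_energy(update: np.ndarray, k: int = 64) -> np.ndarray:
    """update: flattened model update; returns its k largest energy scores."""
    energy = np.abs(update)            # stand-in per-neuron energy proxy
    return np.sort(energy)[-k:]

def flag_backdoor_models(updates: list[np.ndarray], k: int = 64) -> np.ndarray:
    cbes = [concentrated_backdoor_energy(u, k) for u in updates]
    reference = np.median(np.stack(cbes), axis=0)    # consensus CBE profile
    dists = np.array([wasserstein_distance(c, reference) for c in cbes])
    # Median + 3*MAD cutoff, an assumption standing in for the paper's
    # Wasserstein distance-based clustering.
    cutoff = np.median(dists) + 3 * np.median(np.abs(dists - np.median(dists)))
    return dists > cutoff              # True = flagged as backdoored

# Toy usage: 9 benign updates plus one with abnormally large weights.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 0.1, 1000) for _ in range(9)]
poisoned = rng.normal(0, 0.1, 1000)
poisoned[:50] += 5.0
print(flag_backdoor_models(updates + [poisoned]))  # last entry flagged
```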

AAAI 2025 · Conference Paper

NumbOD: A Spatial-Frequency Fusion Attack Against Object Detectors

  • Ziqi Zhou
  • Bowen Li
  • Yufei Song
  • Zhifei Yu
  • Shengshan Hu
  • Wei Wan
  • Leo Yu Zhang
  • Dezhong Yao

With the advancement of deep learning, object detectors (ODs) with various architectures have achieved significant success in complex scenarios such as autonomous driving. Previous adversarial attacks against ODs have focused on designing customized attacks targeting their specific structures (e.g., NMS and RPN), yielding some results but constraining their scalability. Moreover, most efforts against ODs stem from image-level attacks originally designed for classification tasks, resulting in redundant computation and disturbances in object-irrelevant areas (e.g., the background). Consequently, how to design a model-agnostic, efficient attack that comprehensively evaluates the vulnerabilities of ODs remains challenging and unresolved. In this paper, we propose NumbOD, a brand-new spatial-frequency fusion attack against various ODs, aimed at disrupting object detection within images. We directly leverage the features output by the OD, without relying on any of its internal structures, to craft adversarial examples. Specifically, we first design a dual-track attack target selection strategy to select high-quality bounding boxes from OD outputs as targets. Subsequently, we employ directional perturbations to shift and compress predicted boxes and change classification results, thereby deceiving ODs. Additionally, we manipulate the high-frequency components of images to distract ODs' attention from critical objects, enhancing attack efficiency. Our extensive experiments on nine ODs and two datasets show that NumbOD achieves powerful attack performance and high stealthiness.
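The high-frequency manipulation idea can be pictured in a few lines of NumPy: move the image to the frequency domain, perturb only components outside a low-frequency radius, and invert. The radius and noise scale are arbitrary assumptions, and the dual-track box selection and directional box perturbations are not shown.

```python
# Hedged sketch: inject noise only into the high-frequency band of an image.
import numpy as np

def perturb_high_frequencies(img: np.ndarray, radius: int = 20,
                             eps: float = 0.5, seed: int = 0) -> np.ndarray:
    """img: (H, W) grayscale array in [0, 1]; returns the perturbed image."""
    h, w = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[:h, :w]
    high = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > radius ** 2  # HF mask
    noise = np.random.default_rng(seed).normal(0, eps, spectrum.shape) * high
    perturbed = np.fft.ifft2(np.fft.ifftshift(spectrum + noise)).real
    return np.clip(perturbed, 0.0, 1.0)

img = np.random.default_rng(1).random((64, 64))
adv = perturb_high_frequencies(img)
print(float(np.abs(adv - img).max()))  # small pixel change, HF-concentrated
```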

IJCAI 2025 · Conference Paper

Preference Identification by Interaction Overlap for Bundle Recommendation

  • Fei-Yao Liang
  • Wu-Dong Xi
  • Xing-Xing Xing
  • Wei Wan
  • Chang-Dong Wang
  • Hui-Yu Zhou

In the digital age, recommendation systems are crucial for enhancing user experiences, with bundle recommendation playing a key role by bundling complementary products. However, existing methods fail to accurately identify user preferences for specific items within bundles, making it difficult to design bundles containing more items of interest to users. They also fail to leverage the similar preferences shared by users of the same category, resulting in unstable and incomplete preference expressions. To address these issues, we propose Preference Identification by Interaction Overlap for Bundle Recommendation (PIIO), which comprises three modules. The data augmentation module analyzes the overlap between bundle-item inclusions and user-item interactions to estimate the interaction probability of each non-interacted bundle, selecting the bundle with the highest probability as a positive sample; this enriches user-bundle interactions and uncovers user preferences for items within bundles. The preference aggregation module uses the overlap in user-item interactions to select similar users, aggregates their preferences with an autoencoder, and constructs comprehensive preference profiles. The optimization module predicts user-bundle matching scores based on a user interest boundary loss function. Applied to two bundle recommendation datasets, PIIO surpasses state-of-the-art models, demonstrating its effectiveness.
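A minimal sketch of the data augmentation module's overlap idea, assuming a Jaccard-style score (the paper's exact interaction-probability formula may differ): score every bundle the user has not interacted with by item-set overlap with the user's item-level interactions, and take the argmax as an extra positive sample.

```python
# Hedged sketch: pick the non-interacted bundle whose item set best overlaps
# the user's item-level interactions. The Jaccard score is an assumption.
def best_augmented_bundle(user_items: set[int],
                          user_bundles: set[int],
                          bundle_items: dict[int, set[int]]) -> int | None:
    def overlap(items: set[int]) -> float:
        return len(items & user_items) / len(items | user_items)
    candidates = {b: overlap(items) for b, items in bundle_items.items()
                  if b not in user_bundles and items}
    return max(candidates, key=candidates.get) if candidates else None

# Toy usage: the user interacted with items {1, 2, 3} and bundle 10.
bundle_items = {10: {1, 2}, 11: {2, 3, 4}, 12: {7, 8}}
print(best_augmented_bundle({1, 2, 3}, {10}, bundle_items))  # -> 11
```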

IJCAI 2024 · Conference Paper

DarkFed: A Data-Free Backdoor Attack in Federated Learning

  • Minghui Li
  • Wei Wan
  • Yuxuan Ning
  • Shengshan Hu
  • Lulu Xue
  • Leo Yu Zhang
  • Yichen Wang

Federated learning (FL) has been demonstrated to be susceptible to backdoor attacks. However, existing academic studies on FL backdoor attacks rely on a high proportion of real clients with main task-related data, which is impractical. In the context of real-world industrial scenarios, even the simplest defense suffices to defend against the state-of-the-art attack, 3DFed. A practical FL backdoor attack remains in a nascent stage of development. To bridge this gap, we present DarkFed. Initially, we emulate a series of fake clients, thereby achieving the attacker proportion typical of academic research scenarios. Given that these emulated fake clients lack genuine training data, we further propose a data-free approach to backdoor FL. Specifically, we delve into the feasibility of injecting a backdoor using a shadow dataset. Our exploration reveals that impressive attack performance can be achieved, even when there is a substantial gap between the shadow dataset and the main task dataset. This holds true even when employing synthetic data devoid of any semantic information as the shadow dataset. Subsequently, we strategically construct a series of covert backdoor updates in an optimized manner, mimicking the properties of benign updates, to evade detection by defenses. A substantial body of empirical evidence validates the tangible effectiveness of DarkFed.
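To make the covert-update step concrete, here is a hedged sketch in which a shadow-trained backdoor update is blended with the benign mean and rescaled to a benign-looking norm. The blend factor and norm matching are assumptions about what "mimicking the properties of benign updates" might look like, not DarkFed's actual optimization.

```python
# Hedged sketch: disguise a backdoor update by blending it with the benign
# mean and matching the average benign norm.
import numpy as np

def disguise_update(backdoor_update: np.ndarray,
                    benign_updates: list[np.ndarray],
                    blend: float = 0.5) -> np.ndarray:
    benign_mean = np.mean(benign_updates, axis=0)
    mixed = blend * backdoor_update + (1 - blend) * benign_mean
    # Match the average benign norm so magnitude-based filters see nothing odd.
    target_norm = float(np.mean([np.linalg.norm(u) for u in benign_updates]))
    return mixed * (target_norm / (np.linalg.norm(mixed) + 1e-12))

rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.1, 100) for _ in range(5)]
malicious = rng.normal(0, 0.1, 100) + 0.8   # crafted on a shadow dataset
covert = disguise_update(malicious, benign)
print(float(np.linalg.norm(covert)))        # close to the benign norms
```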

IJCAI 2022 · Conference Paper

Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection

  • Wei Wan
  • Shengshan Hu
  • Jianrong Lu
  • Leo Yu Zhang
  • Hai Jin
  • Yuanyuan He

Federated learning (FL) enables multiple clients to collaboratively train an accurate global model while protecting clients' data privacy. However, FL is susceptible to Byzantine attacks from malicious participants. Although the problem has gained significant attention, existing defenses have several flaws: the server irrationally chooses malicious clients for aggregation even after they have been detected in previous rounds, and the defenses are ineffective against Sybil attacks or in heterogeneous data settings. To overcome these issues, we propose MAB-RFL, a new method for robust aggregation in FL. By modeling client selection as an extended multi-armed bandit (MAB) problem, we propose an adaptive client selection strategy that favors honest clients, which are more likely to contribute high-quality updates. We then propose two approaches to identify malicious updates from Sybil and non-Sybil attacks, based on which the reward for each client selection decision can be accurately evaluated to discourage malicious behaviors. MAB-RFL achieves a satisfactory balance between exploration and exploitation over potentially benign clients. Extensive experimental results show that MAB-RFL outperforms existing defenses in three attack scenarios under different percentages of attackers.
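A minimal sketch of client selection as a bandit, using the standard UCB1 rule as a stand-in for the paper's extended-MAB formulation: each client is an arm, and the reward records whether its update passed the malicious-update checks.

```python
# Hedged sketch: UCB1 client selection; reward = update passed the checks.
import math
import random

class BanditClientSelector:
    def __init__(self, n_clients: int):
        self.counts = [0] * n_clients     # times each client was selected
        self.rewards = [0.0] * n_clients  # accumulated pass/fail rewards
        self.t = 0

    def select(self, k: int) -> list[int]:
        self.t += 1
        def ucb(i: int) -> float:
            if self.counts[i] == 0:
                return float("inf")       # explore every client at least once
            mean = self.rewards[i] / self.counts[i]
            return mean + math.sqrt(2 * math.log(self.t) / self.counts[i])
        return sorted(range(len(self.counts)), key=ucb, reverse=True)[:k]

    def update(self, client: int, passed_checks: bool) -> None:
        self.counts[client] += 1
        self.rewards[client] += 1.0 if passed_checks else 0.0

# Toy rounds: clients 0-2 are honest, client 3 is malicious.
sel = BanditClientSelector(4)
for _ in range(50):
    for c in sel.select(2):
        sel.update(c, passed_checks=(c != 3) or random.random() < 0.1)
print(sel.counts)  # client 3 ends up selected far less often
```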

AAMAS 2010 · Conference Paper

Symbolic Model Checking for Agent Interactions

  • Mohamed El-Menshawy
  • Wei Wan
  • Jamal Bentahar
  • Rachida Dssouli

In this paper, we address the specification and verification of commitment protocols having a social semantics. We begin by developing a new language to formally specify these protocols and their desirable properties, enhancing CTL* logic with modalities for commitments and for actions on these commitments. We also present a symbolic model checking algorithm for commitments and their actions based on ordered binary decision diagrams (OBDDs). Finally, we present an implementation and experimental results for the proposed protocol using the NuSMV and MCMAS symbolic model checkers.
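The fixpoint computation at the heart of such model checking can be pictured in a few lines: the sketch below computes the states satisfying EF(goal) by backward reachability over a toy commitment life cycle. It is only a didactic, explicit-state stand-in; the paper's algorithm operates symbolically on OBDDs and handles commitment modalities.

```python
# Hedged sketch: least-fixpoint computation of EF(goal) by backward
# reachability over an explicit transition relation.
def ef(states, transitions, goal):
    """states: iterable; transitions: set of (src, dst) pairs; goal: set."""
    sat = set(goal)
    while True:
        # Add every state with a successor already known to satisfy EF(goal).
        new = {s for s in states
               if any((s, t) in transitions and t in sat for t in states)}
        if new <= sat:
            return sat
        sat |= new

# Toy commitment life cycle: create -> active -> (fulfilled | violated).
states = {"create", "active", "fulfilled", "violated"}
transitions = {("create", "active"), ("active", "fulfilled"),
               ("active", "violated")}
print(sorted(ef(states, transitions, {"fulfilled"})))
# ['active', 'create', 'fulfilled']
```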