Arrow Research Search

Author name cluster

Honglong Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

AAAI Conference 2024 Conference Paper

A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives

  • Yudong Gao
  • Honglong Chen
  • Peng Sun
  • Junjian Li
  • Anqing Zhang
  • Zhibo Wang
  • Weifeng Liu

Backdoor attacks pose serious security threats to deep neural networks (DNNs). Backdoored models make arbitrary (targeted) incorrect predictions on inputs containing well-designed triggers, while behaving normally on clean inputs. Prior research has explored the invisibility of backdoor triggers to enhance attack stealthiness. However, most of it focuses only on invisibility in the spatial domain, neglecting the generation of invisible triggers in the frequency domain. This limitation renders the generated poisoned images easily detectable by recent defense methods. To address this issue, we propose a DUal stealthy BAckdoor attack method named DUBA, which simultaneously considers the invisibility of triggers in both the spatial and frequency domains to achieve desirable attack performance while ensuring strong stealthiness. Specifically, we first use the Wavelet Transform to embed the high-frequency information of the trigger image into the clean image to ensure attack effectiveness. Then, to attain strong stealthiness, we incorporate the Fourier Transform and the Cosine Transform to mix the poisoned image and clean image in the frequency domain. Moreover, DUBA adopts a novel attack strategy, training the model with weak triggers and attacking with strong triggers, to further enhance attack performance and stealthiness. DUBA is evaluated extensively on four datasets against popular image classifiers, showing significant superiority over state-of-the-art backdoor attacks in both attack success rate and stealthiness.
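
A minimal sketch of the wavelet-embedding step the abstract describes, assuming grayscale float images; the function name `embed_trigger_dwt`, the Haar wavelet, and the blending weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pywt  # PyWavelets

def embed_trigger_dwt(clean: np.ndarray, trigger: np.ndarray,
                      alpha: float = 0.2, wavelet: str = "haar") -> np.ndarray:
    """Blend the trigger's high-frequency wavelet sub-bands into a clean image."""
    cA, (cH, cV, cD) = pywt.dwt2(clean, wavelet)    # clean: approx + detail bands
    _, (tH, tV, tD) = pywt.dwt2(trigger, wavelet)   # trigger: detail bands only
    # Keep the clean low-frequency band untouched; mix only the
    # high-frequency detail bands, as the abstract describes.
    mixed_details = ((1 - alpha) * cH + alpha * tH,
                     (1 - alpha) * cV + alpha * tV,
                     (1 - alpha) * cD + alpha * tD)
    return pywt.idwt2((cA, mixed_details), wavelet)

# Hypothetical usage on a 32x32 grayscale image in [0, 1].
clean = np.random.rand(32, 32)
trigger = np.random.rand(32, 32)
poisoned = embed_trigger_dwt(clean, trigger)
```

The subsequent Fourier/Cosine mixing step would then operate on `poisoned` in the same blended fashion in the frequency domain.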

ICML Conference 2024 Conference Paper

Energy-based Backdoor Defense without Task-Specific Samples and Model Retraining

  • Yudong Gao
  • Honglong Chen
  • Peng Sun 0003
  • Zhe Li 0026
  • Junjian Li
  • Huajie Shao

Backdoor defense is crucial to ensure the safety and robustness of machine learning models under attack. However, most existing methods specialize in either the detection or the removal of backdoors, but seldom both. The few works that address both rely on strong assumptions or entail significant overhead, such as the need for task-specific samples for detection and model retraining for removal. Hence, the key challenge is how to reduce overhead and relax unrealistic assumptions. In this work, we propose two Energy-Based BAckdoor defense methods, called EBBA and EBBA+, that achieve both backdoored-model detection and backdoor removal with low overhead. Our contributions are twofold. First, we offer theoretical analysis for our observation that a predefined target label is more likely to occur among the top results for various samples. Inspired by this, we develop an enhanced energy-based technique, called EBBA, to detect backdoored models without task-specific samples (i.e., using samples from any task). Second, we theoretically show that after data corruption, the original clean label of a poisoned sample is more likely to be predicted as a top output by the model, in sharp contrast to clean samples. Accordingly, we extend EBBA to develop EBBA+, a new transferred-energy approach that efficiently detects poisoned images and removes backdoors without model retraining. Extensive experiments on multiple benchmark datasets demonstrate the superior performance of our methods over baselines in both backdoor detection and removal. Notably, the proposed methods can detect backdoored models and poisoned images as well as remove backdoors at the same time.
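
A minimal sketch of the energy-score idea behind this style of detection, assuming a PyTorch classifier; the temperature `T`, the fraction `frac`, the threshold, and the helper names are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Free energy per sample: E(x) = -T * logsumexp(logits / T)."""
    return -T * torch.logsumexp(logits / T, dim=1)

@torch.no_grad()
def dominant_label_rate(model: torch.nn.Module, samples: torch.Tensor,
                        T: float = 1.0, frac: float = 0.5) -> float:
    """Among the lowest-energy (most confident) samples, measure how often a
    single label dominates the top-1 predictions. A backdoored model tends
    to concentrate these predictions on its target label."""
    logits = model(samples)
    k = max(1, int(frac * samples.shape[0]))
    idx = energy_score(logits, T).topk(k, largest=False).indices
    preds = logits[idx].argmax(dim=1)
    return (torch.bincount(preds).max().float() / k).item()

# Hypothetical usage: per the abstract, samples may come from *any* task.
# suspicious = dominant_label_rate(model, task_agnostic_batch) > 0.5
```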

TAAS Journal 2024 Journal Article

IBAQ: Frequency-Domain Backdoor Attack Threatening Autonomous Driving via Quadratic Phase

  • Jinghan Qiu
  • Honglong Chen
  • Junjian Li
  • Yudong Gao
  • Junwei Li
  • Xingang Wang

The rapid evolution of backdoor attacks has emerged as a significant threat to the security of autonomous driving models. An attacker injects a backdoor into the model by adding triggers to the samples; the backdoor can then be activated to manipulate the model's inference. Backdoor attacks can lead to severe consequences, such as misidentifying traffic signs during autonomous driving, posing a risk of traffic accidents. Recently, frequency-domain backdoor attacks have gradually evolved. However, since changing both the amplitude and its corresponding phase significantly affects image appearance, most existing frequency-domain backdoor attacks change only the amplitude, which results in suboptimal attack efficacy. In this work, we propose an attack called IBAQ, which solves this problem by blurring the semantic information of the trigger image through a quadratic phase. Initially, we convert the trigger and the benign sample to YCrCb space. Then, we perform the fast Fourier transform on the Y channel, blending the trigger image's amplitude and quadratic phase linearly with the benign sample's amplitude and phase. IBAQ thus achieves covert injection of trigger information within both amplitude and phase, enhancing the attack effect. We validate the effectiveness and stealthiness of IBAQ through comprehensive experiments.
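
A minimal sketch of the mixing step as worded in the abstract, assuming 8-bit BGR inputs handled via OpenCV; the blend weights `a_amp`/`a_phase` and the squaring of the trigger phase (one reading of "quadratic phase") are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def ibaq_poison(benign_bgr: np.ndarray, trigger_bgr: np.ndarray,
                a_amp: float = 0.1, a_phase: float = 0.1) -> np.ndarray:
    """Blend trigger amplitude and quadratic phase into the benign Y channel."""
    benign = cv2.cvtColor(benign_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    trigger = cv2.cvtColor(trigger_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    Fb = np.fft.fft2(benign[..., 0])     # benign Y-channel spectrum
    Ft = np.fft.fft2(trigger[..., 0])    # trigger Y-channel spectrum
    amp = (1 - a_amp) * np.abs(Fb) + a_amp * np.abs(Ft)
    # Square the trigger phase to blur its semantic content, then blend it
    # linearly with the benign phase.
    phase = (1 - a_phase) * np.angle(Fb) + a_phase * np.angle(Ft) ** 2
    benign[..., 0] = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    poisoned = np.clip(benign, 0, 255).astype(np.uint8)
    return cv2.cvtColor(poisoned, cv2.COLOR_YCrCb2BGR)
```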

IJCAI Conference 2023 Conference Paper

Annealing Genetic-based Preposition Substitution for Text Rubbish Example Generation

  • Chen Li
  • Xinghao Yang
  • Baodi Liu
  • Weifeng Liu
  • Honglong Chen

Modern Natural Language Processing (NLP) models exhibit under-sensitivity to text rubbish examples. A text rubbish example is heavily modified input text that is nonsensical to humans but does not change the model's prediction. Prior work crafts rubbish examples by iteratively deleting words, determining the deletion order with beam search. However, the produced rubbish examples usually cause a reduction in model confidence and sometimes remain human-readable. To address these problems, we propose an Annealing Genetic-based Preposition Substitution (AGPS) algorithm for text rubbish example generation, with two major merits. First, AGPS crafts rubbish text examples by substituting input words with meaningless prepositions instead of directly removing them, which degrades the model's confidence less. Second, we design an Annealing Genetic algorithm to optimize the word replacement priority, which allows the Genetic Algorithm (GA) to probabilistically escape local optima. This is significant in achieving better objectives, i.e., a high word modification rate and high model confidence. Experimental results on five popular datasets manifest the superiority of AGPS over the baselines and expose the fact that NLP models cannot truly understand the semantics of sentences, as they give the same prediction, with even higher confidence, for nonsensical preposition sequences.
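
A toy sketch of the annealing acceptance rule applied to preposition substitution, assuming a callable `confidence(words)` that returns the victim model's confidence in its original prediction; the preposition pool, cooling schedule, and single-objective scoring are simplifying assumptions (the paper also rewards a high modification rate, and embeds annealing inside a genetic search rather than a plain loop).

```python
import math
import random

PREPOSITIONS = ["of", "in", "at", "on", "by", "to"]

def anneal_substitute(words, confidence, steps=200, T0=1.0, cooling=0.98):
    """Replace words with meaningless prepositions while preserving the
    model's prediction confidence; the annealing rule accepts occasional
    worse moves so the search can escape local optima."""
    current, best = list(words), list(words)
    T = T0
    for _ in range(steps):
        candidate = list(current)
        i = random.randrange(len(candidate))
        candidate[i] = random.choice(PREPOSITIONS)  # substitute, never delete
        delta = confidence(candidate) - confidence(current)
        # Always accept improvements; accept worse moves with prob e^(delta/T).
        if delta >= 0 or random.random() < math.exp(delta / T):
            current = candidate
            if confidence(current) >= confidence(best):
                best = current
        T *= cooling  # cool the temperature each step
    return best
```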