Arrow Research

Author name cluster

Chuanxi Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

JBHI · Journal · 2025 · Journal Article

CT-DCENet: Deep EEG Denoising via CNN-Transformer-Based Dual-Stage Collaborative Ensemble Learning

  • Yunbo Tang
  • Weirong Huang
  • Chuanxi Chen
  • Dan Chen

Electroencephalogram (EEG) artifact removal has been investigated for decades with the goal of reconstructing clean signals for subsequent EEG analysis. However, existing denoising methods remain limited in handling highly mixed artifacts and the fine-grained temporal dependency of artifact-free EEG without a priori knowledge of the artifacts. To address these challenges, this study proposes a CNN-Transformer-based dual-stage collaborative ensemble learning framework (CT-DCENet) built from three modules: 1) a randomized collaboration module initially employs four individual learners to reveal multi-group morphological characteristics of the denoised EEG; 2) a linear ensemble module integrates the outputs of the four individual learners via a weighted linear combination to preliminarily estimate the denoised EEG; 3) an information complementation module takes in the residual between the contaminated EEG and the above estimate, and applies a CNN-Transformer-based feature extractor and denoising head to learn the detailed characteristics of the denoised EEG. CT-DCENet is trained in a dual-stage manner to derive first the morphological and then the detailed characteristics of the artifact-free EEG. Experimental results on public EEG datasets indicate that 1) CT-DCENet significantly outperforms state-of-the-art counterparts (e.g., DuoCL, GCTNet) under various artifacts and noise intensities, with SNR and PCC increases of 0.79 dB and 0.6% and an RRMSE decrease of 1.9% for the removal of mixed EMG, ECG, and EOG artifacts; and 2) the EEG reconstructed by CT-DCENet closely fits the clean EEG with low error, especially at the peak amplitude, the high-frequency regions, and the boundary regions of the EEG waveform, providing promising EEG data for downstream task-oriented EEG analysis.
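The dual-stage ensemble flow described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration only: the learner architecture, channel sizes, transformer depth, and segment length below are assumptions, not the authors' implementation; it only shows the flow of four individual learners, a weighted linear ensemble, and a CNN-Transformer refiner operating on the residual.

import torch
import torch.nn as nn

class Learner(nn.Module):
    # One of the four individual denoisers in the first stage (a small 1-D CNN here, as an assumption).
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=7, padding=3),
        )
    def forward(self, x):                             # x: (batch, 1, time)
        return self.net(x)

class CTDCENetSketch(nn.Module):
    def __init__(self, n_learners=4, d_model=32):
        super().__init__()
        self.learners = nn.ModuleList([Learner() for _ in range(n_learners)])
        self.weights = nn.Parameter(torch.ones(n_learners) / n_learners)   # linear ensemble weights
        self.embed = nn.Conv1d(1, d_model, kernel_size=7, padding=3)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv1d(d_model, 1, kernel_size=7, padding=3)

    def forward(self, noisy):                         # noisy: (batch, 1, time)
        # Stage 1: the individual learners reveal morphological characteristics;
        # a weighted linear combination gives the preliminary denoised estimate.
        outs = torch.stack([f(noisy) for f in self.learners])
        w = torch.softmax(self.weights, dim=0).view(-1, 1, 1, 1)
        coarse = (w * outs).sum(dim=0)
        # Stage 2: the residual between the contaminated EEG and the estimate is fed to
        # a CNN-Transformer feature extractor and a denoising head for the fine details.
        residual = noisy - coarse
        h = self.embed(residual).permute(0, 2, 1)     # (batch, time, d_model)
        h = self.encoder(h).permute(0, 2, 1)
        return coarse + self.head(h)

x = torch.randn(2, 1, 512)                            # two contaminated single-channel EEG segments
print(CTDCENetSketch()(x).shape)                      # torch.Size([2, 1, 512])

In the paper's dual-stage training, the stage-1 learners and the stage-2 refiner would be optimized in sequence rather than jointly; the sketch leaves the training loop out.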

AAAI · Conference · 2024 · Conference Paper

Once and for All: Universal Transferable Adversarial Perturbation against Deep Hashing-Based Facial Image Retrieval

  • Long Tang
  • Dengpan Ye
  • Yunna Lv
  • Chuanxi Chen
  • Yunming Zhang

Deep Hashing (DH)-based image retrieval has been widely applied to face-matching systems due to its accuracy and efficiency. However, this convenience comes with an increased risk of privacy leakage. DH models inherit the vulnerability to adversarial attacks, which can be used to prevent the retrieval of private images. Existing adversarial attacks against DH typically target a single image or a specific class of images, lacking a universal adversarial perturbation for the entire hash dataset. In this paper, we propose the first universal transferable adversarial perturbation against DH-based facial image retrieval: a single perturbation can protect all images. Specifically, we explore the relationship between clusters learned by different DH models and define the optimization objective of the universal perturbation as moving away from the overall hash center. To mitigate the difficulty of this single-objective optimization, we randomly obtain sub-cluster centers and further propose sub-task-based meta-learning to aid the overall optimization. We test our method on popular facial datasets and DH models, showing impressive cross-image, -identity, -model, and -scheme universal anti-retrieval performance. Compared to state-of-the-art methods, our performance is competitive in white-box settings and exhibits significant improvements of 10%-70% in transferability across all black-box settings.
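As an illustration of the "move away from the overall hash center" objective, here is a minimal PyTorch sketch: a single perturbation shared by all images is optimized so that every (tanh-relaxed) hash code of a surrogate model is pushed away from the dataset-wide hash center. The surrogate model, 112x112 image size, 48-bit code length, epsilon budget, and loop settings are assumptions for illustration; the paper's sub-cluster centers and sub-task-based meta-learning are omitted.

import torch
import torch.nn as nn

def hash_center(model, loader):
    # Dataset-wide mean of the continuous hash codes: the "overall hash center".
    with torch.no_grad():
        codes = [torch.tanh(model(x)) for x, _ in loader]
    return torch.cat(codes).mean(dim=0)

def universal_perturbation(model, loader, eps=8 / 255, steps=10, lr=1e-2):
    center = hash_center(model, loader)
    delta = torch.zeros(1, 3, 112, 112, requires_grad=True)       # one perturbation shared by all images
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, _ in loader:
            codes = torch.tanh(model(x + delta))
            # Minimizing cosine similarity to the center pushes every code away from it.
            loss = torch.cosine_similarity(codes, center.unsqueeze(0)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                            # keep the perturbation imperceptible
    return delta.detach()

# Toy surrogate: a 48-bit hashing "model" and two random batches standing in for face images.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 48))
loader = [(torch.rand(4, 3, 112, 112), None) for _ in range(2)]
delta = universal_perturbation(surrogate, loader)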

IJCAI · Conference · 2023 · Conference Paper

Voice Guard: Protecting Voice Privacy with Strong and Imperceptible Adversarial Perturbation in the Time Domain

  • Jingyang Li
  • Dengpan Ye
  • Long Tang
  • Chuanxi Chen
  • Shengshan Hu

Adversarial examples are an emerging tool for voice privacy protection. By adding imperceptible noise to public audio, they prevent tamperers from using zero-shot Voice Conversion (VC) to synthesize high-quality speech with the target speaker's identity. However, many existing studies ignore the human perceptual characteristics of audio data, and it is challenging to generate strong yet imperceptible adversarial audio. In this paper, we propose the Voice Guard defense method, which moves the adversarial perturbation to the time domain to avoid the loss caused by cross-domain conversion. A psychoacoustic model is also introduced into the defense of VC for the first time, which greatly improves the disruption ability and concealment of the adversarial audio. We further standardize the evaluation metrics for adversarial audio for the first time, combining multi-dimensional metrics to define the criteria for defense. We evaluate Voice Guard on several state-of-the-art zero-shot VC models. The experimental results show that our method preserves the perceptual quality of the adversarial audio while providing a strong defense capability, and it far surpasses previous works in disruption ability and concealment.
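A minimal time-domain sketch of the protection idea, assuming a generic speaker encoder as the attack surface: a perturbation added directly to the waveform is optimized to push the speaker embedding of the protected audio away from the original, so a zero-shot VC model conditioned on it no longer captures the speaker identity. The plain L-infinity budget below is a crude stand-in for the paper's psychoacoustic masking constraint; the encoder, step count, and sample rate are assumptions.

import torch
import torch.nn as nn

def protect_voice(speaker_encoder, wav, eps=0.002, steps=200, lr=1e-3):
    # Optimize a time-domain perturbation that pushes the speaker embedding of the
    # protected audio away from the original speaker embedding.
    with torch.no_grad():
        original = speaker_encoder(wav)
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = speaker_encoder(wav + delta)
        loss = torch.cosine_similarity(emb, original, dim=-1).mean()   # similarity to the identity: minimize
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)     # stand-in for the psychoacoustic masking threshold (assumption)
    return (wav + delta).detach()

# Toy usage: a stand-in speaker encoder and one second of 16 kHz audio.
encoder = nn.Linear(16000, 128)
wav = torch.rand(1, 16000) * 2 - 1
protected = protect_voice(encoder, wav, steps=5)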