
Author name cluster

Fangjun Huang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows
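
As a concrete illustration of the grouping rule described above (exact, case-insensitive name match with no further disambiguation), here is a minimal sketch; the record fields are assumptions, not Arrow's actual schema.

```python
# Minimal sketch of the clustering rule above: group author rows whose
# names match exactly, ignoring case. No identity disambiguation is done.
# The field names ("name", "paper_id") are hypothetical, not Arrow's schema.
from collections import defaultdict

def cluster_by_exact_name(author_rows):
    clusters = defaultdict(list)
    for row in author_rows:
        clusters[row["name"].strip().lower()].append(row["paper_id"])
    return clusters

rows = [{"name": "Fangjun Huang", "paper_id": "p1"},
        {"name": "FANGJUN HUANG", "paper_id": "p2"}]
print(cluster_by_exact_name(rows))  # both rows fall into one cluster
```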

Possible papers (4)

AAAI Conference 2025 · Conference Paper

OmniMark: Efficient and Scalable Latent Diffusion Model Fingerprinting

  • Jianwei Fei
  • Yunshu Dai
  • Zhihua Xia
  • Fangjun Huang
  • Jiantao Zhou

We introduce OmniMark, a novel and efficient fingerprinting method for Latent Diffusion Models (LDMs). OmniMark can encode user-specific fingerprints across diverse dimensions of the LDM's weights, including kernels, filters, channels, and spatial domains. The LDM is fine-tuned to embed the invisible fingerprint into generated images, from which a paired decoder can recover it. By altering fingerprints and re-encoding the weights, OmniMark supports efficient and scalable ad-hoc generation (<100 ms) of numerous models with unique fingerprints, enabling user accountability and model attribution. Extensive experiments demonstrate that OmniMark can be applied to various image generation and editing tasks and achieves highly accurate fingerprint detection without compromising image quality. Furthermore, OmniMark demonstrates good robustness against both white-box model attacks and image attacks, including fine-tuning and JPEG compression.
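
To make the weight-level embedding idea concrete, here is a toy sketch of stamping a per-filter fingerprint onto a single convolution weight. It is an illustration only: OmniMark's actual embedding is learned end-to-end and spans kernel, filter, channel, and spatial dimensions, and every name and constant below is an assumption.

```python
# Toy sketch only: modulate one conv weight (out_ch, in_ch, kH, kW) with a
# user-specific bit string, one bit per output filter. OmniMark's real
# embedding is learned; the strength constant here is an arbitrary choice.
import torch

def stamp_fingerprint(conv_weight: torch.Tensor, bits: torch.Tensor,
                      strength: float = 1e-3) -> torch.Tensor:
    signs = bits.float() * 2 - 1                 # map {0, 1} -> {-1, +1}
    delta = signs.view(-1, 1, 1, 1) * strength   # one offset per filter
    return conv_weight + delta                   # re-encoding is a cheap add

# Usage: stamp a 64-filter layer with a 64-bit fingerprint.
w = torch.randn(64, 64, 3, 3)
user_bits = torch.randint(0, 2, (64,))
w_marked = stamp_fingerprint(w, user_bits)
```

Because re-encoding here is a single addition over the base weights, producing a new uniquely fingerprinted model is cheap, which matches the spirit of the sub-100 ms ad-hoc generation claimed above.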

ICML Conference 2025 · Conference Paper

Robust Secure Swap: Responsible Face Swap With Persons of Interest Redaction and Provenance Traceability

  • Yunshu Dai
  • Jianwei Fei
  • Fangjun Huang
  • Chip Hong Chang

As AI generative models evolve, face swap technology has become increasingly accessible, raising concerns over potential misuse. Celebrities may be manipulated without consent, and ordinary individuals may fall victim to identity fraud. To address these threats, we propose Secure Swap, a method that protects persons of interest (POI) from face-swapping abuse and embeds a unique, invisible watermark into non-POI swapped images for traceability. By introducing an ID Passport layer, Secure Swap redacts POI faces and generates watermarked outputs for non-POI. A detachable watermark encoder and decoder are trained with the model to ensure provenance tracing. Experimental results demonstrate that Secure Swap not only preserves face swap functionality but also effectively prevents unauthorized swaps of POI and detects the watermarks embedded by different models with high accuracy. Specifically, our method achieves a 100% success rate in protecting POI and over 99% watermark extraction accuracy for non-POI. Beyond fidelity and effectiveness, the robustness of protected models against image-level and model-level attacks is also demonstrated experimentally in both online and offline application scenarios.
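
For intuition about the detachable watermark encoder and decoder mentioned above, the sketch below shows one common way such a pair can be structured; the architecture, bit width, and residual scale are assumptions, not the paper's design.

```python
# Hedged sketch of a detachable watermark encoder/decoder pair. Architecture,
# bit width, and the 0.01 residual scale are illustrative assumptions.
import torch
import torch.nn as nn

class WMEncoder(nn.Module):
    """Adds an invisible residual carrying n_bits to an RGB image."""
    def __init__(self, n_bits: int = 48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_bits, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, bits):
        # Broadcast the bit string to a per-pixel conditioning map.
        b, _, h, w = img.shape
        bmap = bits.float().view(b, -1, 1, 1).expand(-1, -1, h, w)
        return img + 0.01 * self.net(torch.cat([img, bmap], dim=1))

class WMDecoder(nn.Module):
    """Recovers the bit string from a (possibly distorted) image."""
    def __init__(self, n_bits: int = 48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_bits),
        )

    def forward(self, img):
        return self.net(img)  # bit logits; train with BCEWithLogitsLoss
```

The usual recipe for such pairs is to train both modules jointly with the generation model, combining a bit-recovery loss on the decoder with a fidelity loss on the watermarked image.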

ICML Conference 2025 · Conference Paper

Variance as a Catalyst: Efficient and Transferable Semantic Erasure Adversarial Attack for Customized Diffusion Models

  • Jiachen Yang
  • Yusong Wang
  • Yanmei Fang
  • Yunshu Dai
  • Fangjun Huang

Latent Diffusion Models (LDMs) enable fine-tuning with only a few images and have become widely used on the Internet. However, they can also be misused to generate fake images, leading to privacy violations and social risks. Existing adversarial attack methods primarily introduce noise distortions into generated images but fail to completely erase identity semantics. In this work, we identify the variance of the VAE latent code as a key factor that influences image distortion: larger variances result in stronger distortions and ultimately erase semantic information. Based on this finding, we propose a Laplace-based (LA) loss function that optimizes along the direction of fastest variance growth, ensuring each optimization step is locally optimal. Additionally, we analyze the limitations of existing methods and reveal that their loss functions often fail to align gradient signs with the direction of variance growth and struggle to ensure efficient optimization under different variance distributions. To address these issues, we further propose a novel Lagrange Entropy-based (LE) loss function. Experimental results demonstrate that our methods achieve state-of-the-art performance on CelebA-HQ and VGGFace2. Both proposed loss functions effectively lead diffusion models to generate pure-noise images with identity semantics completely erased. Furthermore, our methods exhibit strong transferability across diverse models and complete attacks efficiently with minimal computational resources. Our work provides a practical and efficient solution for privacy protection.
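
The core observation, that pushing up the variance of the VAE latent code distorts generations, can be illustrated with a plain PGD loop; the simple variance-ascent objective and the encoder interface below are assumptions, not the paper's LA or LE losses.

```python
# Hedged sketch: generic PGD that ascends the sample variance of a VAE
# latent code, illustrating the variance-distortion link described above.
# The plain var() objective is NOT the paper's LA/LE loss; `encode` is an
# assumed callable mapping an image batch in [0, 1] to its latent tensor.
import torch

def variance_attack(encode, x, eps=8/255, alpha=1/255, steps=50):
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        var = encode(x_adv).var()            # sample variance of the latent
        grad = torch.autograd.grad(var, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend on variance
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-inf projection
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```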

AAAI Conference 2021 · Conference Paper

PID-Based Approach to Adversarial Attacks

  • Chen Wan
  • Biaohua Ye
  • Fangjun Huang

Adversarial attacks can mislead deep neural networks (DNNs) by adding small-magnitude perturbations to normal examples; these perturbations are determined mainly by the gradient of the loss function with respect to the inputs. Various strategies have been proposed to enhance the performance of adversarial attacks, but all of these methods use only the present and past gradients to generate adversarial examples; the future trend of the gradient (i.e., the derivative of the gradient) has not yet been considered. Inspired by the classic proportional-integral-derivative (PID) controller from the field of automatic control, we propose a new PID-based approach for generating adversarial examples. Our method considers the present gradient, the past gradients, and the derivative of the gradient, which correspond to the P, I, and D components of the PID controller, respectively. Extensive experiments consistently demonstrate that our method achieves higher attack success rates and exhibits better transferability than state-of-the-art gradient-based adversarial attacks. Furthermore, our method possesses good extensibility and can be applied to almost all available gradient-based adversarial attacks.
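
The P/I/D correspondence described above maps naturally onto an iterative sign-gradient attack. The sketch below grafts the three terms onto an MI-FGSM-style loop; the coefficients and the exact combination rule are assumptions, not the paper's formulation.

```python
# Hedged sketch of the PID idea on an iterative sign-gradient attack:
# P = current gradient, I = accumulated past gradients (momentum-style),
# D = step-to-step change in the gradient. Coefficients kp/ki/kd and the
# linear combination are illustrative assumptions, not the paper's values.
import torch
import torch.nn.functional as F

def pid_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
               kp=1.0, ki=1.0, kd=1.0):
    x_adv = x.clone()
    integral = torch.zeros_like(x)
    prev_grad = torch.zeros_like(x)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        grad = grad / grad.abs().mean().clamp_min(1e-12)  # L1-style normalization
        integral = integral + grad          # I: sum of past gradients
        derivative = grad - prev_grad       # D: gradient trend
        prev_grad = grad
        update = kp * grad + ki * integral + kd * derivative
        with torch.no_grad():
            x_adv = x_adv + alpha * update.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

With kd set to 0 this reduces to a momentum-style attack, which is one way to see the D term as the new ingredient: it anticipates where the gradient is heading rather than only averaging where it has been.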