
AAAI 2024

SAME: Sample Reconstruction against Model Extraction Attacks

Conference Paper · AAAI Technical Track on Philosophy and Ethics of AI

Abstract

While deep learning models have shown strong performance across many domains, deploying them requires extensive resources and advanced computing infrastructure. Machine Learning as a Service (MLaaS) has emerged as a solution, lowering the barrier for users to release or productize their deep learning models. However, previous studies have highlighted privacy and security concerns associated with MLaaS, one primary threat being model extraction attacks. Many defense solutions have been proposed, but they suffer from unrealistic assumptions and generalization issues, making them impractical for reliable protection. Driven by these limitations, we introduce SAME, a novel defense mechanism based on the concept of sample reconstruction. This strategy imposes minimal prerequisites on the defender's capabilities, eliminating the need for auxiliary Out-of-Distribution (OOD) datasets, user query history, white-box model access, and additional intervention during model training. It is also compatible with existing active defense methods. Our extensive experiments corroborate the superior efficacy of SAME over state-of-the-art solutions. Our code is available at https://github.com/xythink/SAME.
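The general idea behind reconstruction-based defenses can be illustrated with a toy sketch: fit a reconstruction model on in-distribution data only, then flag queries whose reconstruction error is abnormally high as potential extraction probes. The sketch below is not the SAME implementation (see the linked repository for that); it uses a linear PCA "autoencoder" and synthetic data purely to show the screening pattern, and all names and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-distribution data: samples lying near a 4-dim subspace of R^32.
basis = rng.normal(size=(4, 32))
train = rng.normal(size=(500, 4)) @ basis + 0.05 * rng.normal(size=(500, 32))

# Fit a linear reconstruction model (PCA) on in-distribution data only.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:4]  # top-4 principal directions

def reconstruction_error(x):
    """Project onto the learned subspace and measure the residual norm."""
    centered = np.atleast_2d(x) - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=-1)

# Calibrate a threshold from the in-distribution error distribution.
threshold = np.quantile(reconstruction_error(train), 0.99)

def is_suspicious(query):
    """Flag queries that reconstruct poorly, i.e. look out-of-distribution."""
    return reconstruction_error(query) > threshold

benign = rng.normal(size=(10, 4)) @ basis      # on the learned subspace
probes = 3.0 * rng.normal(size=(10, 32))       # far from the subspace
print(is_suspicious(benign).sum(), "benign flagged")
print(is_suspicious(probes).sum(), "probes flagged")
```

In a deployed MLaaS setting the same gate would sit in front of the served model, and flagged queries could be rejected or answered with a perturbed output by an active defense, which is the sense in which such screening composes with existing methods.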

Keywords

  • CV: Adversarial Attacks & Robustness
  • CV: Bias, Fairness & Privacy
  • ML: Privacy
  • PEAI: Privacy & Security
  • PEAI: Safety, Robustness & Trustworthiness

Context

Venue
AAAI Conference on Artificial Intelligence