
ICML 2025

Assessing Safety Risks and Quantization-aware Safety Patching for Quantized Large Language Models

Conference Paper · Accept (poster) · Artificial Intelligence · Machine Learning

Abstract

Quantized large language models (LLMs) have gained increasing attention and significance for enabling deployment in resource-constrained environments. However, emerging studies on a few calibration-data-free quantization methods suggest that quantization may compromise the safety capabilities of LLMs, underscoring the urgent need for systematic safety evaluations and effective mitigation strategies. In this paper, we present comprehensive safety evaluations across various mainstream quantization techniques and diverse calibration datasets, utilizing widely accepted safety benchmarks. To address the identified safety vulnerabilities, we propose a quantization-aware safety patching framework, Q-resafe, to efficiently restore the safety capabilities of quantized LLMs while minimizing any adverse impact on utility. Extensive experimental results demonstrate that Q-resafe successfully re-aligns the safety of quantized LLMs with their pre-quantization counterparts, even under challenging evaluation scenarios. Project page: https://github.com/Thecommonirin/Qresafe
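For readers unfamiliar with the quantization step the abstract refers to, the sketch below shows the minimal per-channel round-to-nearest weight quantization that calibration-data-free methods build on, and the reconstruction error such rounding introduces. This is a hypothetical illustration for context only, not the paper's Q-resafe code; the function names, the 4-bit setting, and the use of NumPy are assumptions.

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int = 4):
    """Per-output-channel symmetric round-to-nearest quantization (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)        # guard against all-zero rows
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Map integer codes back to floating point."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for one linear layer of an LLM.
w = np.random.randn(8, 16).astype(np.float32)
q, s = quantize_rtn(w, bits=4)
w_hat = dequantize(q, s)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

The rounding error visible in this toy example is the kind of weight perturbation that, per the abstract, can degrade safety behavior at scale; Q-resafe's patching of quantized models is described in the paper itself.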


Keywords

  • Large Language Model
  • Preference Alignment
  • Safety Evaluation

Context

Venue
International Conference on Machine Learning