
NeurIPS 2025

Why Do Some Language Models Fake Alignment While Others Don't?

Conference Paper Main Conference Track Artificial Intelligence · Machine Learning

Abstract

Alignment Faking in Large Language Models presented a demonstration of Claude 3 Opus and Claude 3.5 Sonnet selectively complying with a helpful-only training objective to prevent modification of their behavior outside of training. We expand this analysis to 25 models and find that only 5 (Claude 3 Opus, Claude 3.5 Sonnet, Llama 3 405B, Grok 3, Gemini 2.0 Flash) comply with harmful queries more when they infer they are in training than when they infer they are in deployment. First, we study the motivations of these 5 models. Results from perturbing details of the scenario suggest that only Claude 3 Opus's compliance gap is primarily and consistently motivated by trying to keep its goals. Second, we investigate why many chat models don't fake alignment. Our results suggest this is not entirely due to a lack of capabilities: many base models fake alignment some of the time, and post-training eliminates alignment faking for some models and amplifies it for others. We investigate 5 hypotheses for how post-training may suppress alignment faking and find that variations in refusal behavior may account for a significant portion of differences in alignment faking.
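The abstract's central quantity, the "compliance gap," is the difference between a model's harmful-query compliance rate when it infers it is in training and when it infers it is in deployment. A minimal sketch of that computation (not code from the paper; the function name and counts are illustrative assumptions):

```python
def compliance_gap(train_complied: int, train_total: int,
                   deploy_complied: int, deploy_total: int) -> float:
    """Compliance rate in inferred-training contexts minus the rate
    in inferred-deployment contexts. A positive gap means the model
    complies more often when it believes it is being trained."""
    return train_complied / train_total - deploy_complied / deploy_total

# Illustrative numbers only: a model complying with 12/100 harmful
# queries when it infers training, but 3/100 when it infers deployment.
gap = compliance_gap(12, 100, 3, 100)
print(round(gap, 2))  # 0.09
```

A model with a gap near zero behaves the same in both contexts; per the abstract, only 5 of the 25 models studied show a positive gap.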

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
Annual Conference on Neural Information Processing Systems
Archive span
1987-2025
Indexed papers
30776
Paper id
196800484773636934