IS Journal (2026), Journal Article
Pavlov’s Dog and Large Language Models: The Double-Edged Power of Context Conditioning
- Denghui Zhang
- Rushi Wang
- Jiateng Liu
- Kezia Oketch
- Yiyu Shi
- Heng Ji
- Ahmed Abbasi
We introduce context conditioning, a phenomenon analogous to Pavlovian learning in which large language models (LLMs) display heightened sensitivity to even small amounts of novel contextual signal. This conditioning is double-edged: carefully curated contexts can quickly steer models toward trustworthy, inclusive behavior, while minor malicious or biased signals can provoke unsafe, toxic, or privacy-compromising responses. We reveal this double-edged behavior through two studies that together highlight the underlying associative amplification mechanism, by which novel or low-frequency contextual cues exert outsized influence on model attention and response distributions. Trust in context-based artificial intelligence (AI) therefore depends not only on model design but also on how context governs behavior at inference time. We outline five research directions for building trustworthy context-based LLM systems and argue that the future of responsible AI lies not only in safer models but also in safer contexts: systems that understand, audit, and adapt to the stimuli that condition them.