
AAAI 2026

Controllable Epistemic Sensitivity in Large Language Models: Probing, Benchmarking, and Adaptive Reasoning

Short Paper AAAI Undergraduate Consortium Artificial Intelligence

Abstract

This proposal investigates epistemic uncertainty (uncertainty about knowledge or truth, often conveyed by modals such as "might" or "probably") in Large Language Models (LLMs). By probing how such cues affect reasoning, we seek to achieve controllable epistemic sensitivity: enabling models to interpret and adapt to uncertainty. Using activation-level analyses and multilingual benchmarks, this work advances transparent, context-aware, and trustworthy reasoning in uncertainty-critical domains.


Context

Venue
AAAI Conference on Artificial Intelligence