AAAI 2026
Value-Driven Memory-Augmented Generation for Agentic LLMs: Towards Structured and Adaptive Knowledge Utilization
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in reasoning, yet their efficacy is constrained by a fundamental memory limitation: a static context window that resets with each interaction. This prevents them from accumulating experience and adapting to dynamic, long-term tasks. To address this limitation, this work introduces a neuro-inspired framework with two key contributions. First, we propose ARTEM (Agentic Retrieval with Temporal-Episodic Memory), a system that organizes experiences into structured events and consolidates memory based on utility. Second, we extend this framework with a distinct governance component, Value-driven ARTEM, which validates candidate outputs against core principles before finalization. Together, these components equip LLM agents with continual learning, adaptive reasoning, and robust value-aligned decision-making. Looking forward, we outline future directions including dynamic memory adaptation, memory decay mechanisms, and applications in interactive multi-agent environments.
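The abstract's two components can be illustrated with a minimal sketch: a temporal-episodic store that consolidates by utility, and a value gate that checks candidates before release. All names here (EpisodicMemory, ValueGate, the utility scores) are illustrative assumptions, not details from the paper.

```python
import time
from dataclasses import dataclass


@dataclass
class Event:
    """A structured episodic event, as the abstract describes."""
    content: str
    timestamp: float
    utility: float  # assumed usefulness score in [0, 1]


class EpisodicMemory:
    """Utility-based consolidation: keep only the most useful events."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.events: list[Event] = []

    def store(self, content: str, utility: float) -> None:
        self.events.append(Event(content, time.time(), utility))
        self._consolidate()

    def _consolidate(self) -> None:
        # When over capacity, retain the highest-utility events.
        if len(self.events) > self.capacity:
            self.events.sort(key=lambda e: e.utility, reverse=True)
            self.events = self.events[: self.capacity]

    def retrieve(self, k: int = 3) -> list[str]:
        # Return the k most recent retained events.
        ordered = sorted(self.events, key=lambda e: e.timestamp, reverse=True)
        return [e.content for e in ordered[:k]]


class ValueGate:
    """Validates a candidate output against core principles before finalization."""

    def __init__(self, principles):
        self.principles = principles  # list of predicates: str -> bool

    def approve(self, candidate: str) -> bool:
        return all(p(candidate) for p in self.principles)


# Low-utility events are evicted during consolidation.
memory = EpisodicMemory(capacity=2)
memory.store("user prefers concise answers", utility=0.9)
memory.store("weather was sunny", utility=0.1)
memory.store("user deadline is Friday", utility=0.8)

# A toy principle: never leak credentials.
gate = ValueGate([lambda text: "password" not in text])
```

Usage: `memory.retrieve()` would surface the two high-utility events, while `gate.approve("your password is 123")` returns `False`, modeling the pre-finalization check.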
Context
- Venue: AAAI Conference on Artificial Intelligence
- Paper id: 306253810595439324