
FLAP 2025

Reasoning Alignment for Agentic AI: Argumentation, Belief Revision, and Dialogue

Journal Article · Number 6 · Logic in Computer Science

Abstract

Agentic AI—deployed as technical systems that perceive, decide, and act via tools—faces requirements of safety, accountability, controlled adaptivity, and compositionality. We develop Reasoning Alignment Diagrams (RADs), commutative reasoning representations that align a source specification with an argumentation-based explanation route. As illustrative examples, we first show that full-meet belief base revision admits an exact representation within base argumentation via a restricted-attack construction: the revised base equals the intersection of the premises appearing in all stable extensions of the modified framework. This yields a RAD from input to sanctioned output that doubles as an explanation engine. We then compose the “listen” (revision) and “assert” (inference) RADs to model dialogue among agents, enabling explainable and auditable autonomy. Although our results are entirely symbolic, the RAD template can serve as a specification layer even when other components are opaque or learned. The approach realizes core themes of Gabbay’s programme—logic as a toolbox, combining logics, and argumentation as a host formalism—and supports a principle-based analysis of correctness, transparency, and modularity.
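To make the notion of full-meet belief base revision concrete, the following is a minimal, self-contained sketch of the standard operation the abstract refers to: intersect all maximal subsets of the base consistent with the new input, then add the input. This is an illustration of textbook full-meet revision only, not the paper's restricted-attack argumentation construction; the atoms, formula encoding, and function names (`ATOMS`, `satisfiable`, `full_meet_revise`) are illustrative assumptions.

```python
from itertools import combinations, product

# Tiny propositional setting: formulas are Python boolean expressions
# over the atoms in ATOMS (an illustrative choice, not from the paper).
ATOMS = ["p", "q", "r"]

def satisfiable(formulas):
    """Brute-force check: does some truth assignment make all formulas true?"""
    for values in product([True, False], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, values))
        if all(eval(f, {}, env) for f in formulas):
            return True
    return False

def full_meet_revise(base, phi):
    """Full-meet base revision: intersect all maximal subsets of `base`
    consistent with `phi`, then add `phi` itself."""
    if not satisfiable([phi]):
        return [phi]  # degenerate case: revising by a contradiction
    remainders = []
    # Enumerate subsets from largest to smallest, keeping only maximal ones.
    for k in range(len(base), -1, -1):
        for subset in combinations(base, k):
            if satisfiable(list(subset) + [phi]):
                if not any(set(subset) < set(r) for r in remainders):
                    remainders.append(set(subset))
    common = set.intersection(*remainders) if remainders else set()
    return sorted(common) + [phi]

# The base implies q; revising by "not q" forces retraction.
base = ["p", "p or q", "not p or q"]
print(full_meet_revise(base, "not q"))  # → ['not q']
```

The example also shows why full-meet revision is maximally cautious: the two maximal consistent subsets here share no premise, so their intersection is empty and only the new input survives—precisely the kind of sanctioned-output behaviour a RAD is meant to align with its argumentation-based explanation route.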

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
IfCoLog Journal of Logics and their Applications
Archive span
2014-2026
Indexed papers
633
Paper id
364136694373200840