
AAAI 2026

Empowering LLMs with Symbolic Representation and Reasoning

Short Paper AAAI Doctoral Consortium Track Artificial Intelligence

Abstract

Large language models (LLMs) have achieved remarkable success in natural language processing tasks but still struggle with complex causal and logical reasoning. Previous neuro-symbolic methods can be summarized as a two-stage framework: first translating a natural language (NL) problem into a symbolic language (SL) representation, and then performing symbolic reasoning over that representation. To advance this direction, we provide a comprehensive survey that identifies two main challenges, complex logical question answering (QA) and cross-question logical consistency, and we further propose a new taxonomy. To achieve precise symbolic representation and improve the accuracy of LLMs' logical reasoning, we propose several effective and efficient approaches: adaptively selecting the most suitable SL for each QA problem, a data-driven method for ordering fine-tuning samples, and an efficient multi-agent debate framework with sparse communication. Our future research will focus on theoretical analysis of optimal SL selection, translation refinement, and robust neuro-symbolic approaches to further improve LLMs' reasoning.
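The two-stage framework the abstract describes can be illustrated with a minimal sketch. This is not the authors' system: stage 1 (NL-to-SL translation) would normally be performed by an LLM, so its output is hard-coded here, and stage 2 is a toy forward-chaining reasoner over invented predicates.

```python
# Toy sketch of the two-stage neuro-symbolic pipeline (illustrative only).
# Stage 1 (NL -> SL): an LLM would translate the NL problem into symbolic
# form; here we hard-code a hypothetical translation of
# "Socrates is a man. All men are mortal."
facts = {("man", "socrates")}
rules = [
    # Each rule (premise, conclusion) encodes: premise(X) -> conclusion(X).
    ("man", "mortal"),
]

def forward_chain(facts, rules):
    """Stage 2 (symbolic reasoning): apply rules until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

# Query: is Socrates mortal?
print(("mortal", "socrates") in forward_chain(facts, rules))  # True
```

Because stage 2 is deterministic, any reasoning error can be attributed to the translation step, which is the motivation for the translation-refinement work mentioned in the abstract.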

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Paper id
553491160554190360