TIST 2026, Journal Article
From Hallucination to Certainty: Meta-Knowledge Guided Self-Correcting Large Language Models
- Wei Zhang
- Guojun Dai
- Ding Luo
- Yan Wang
- Chen Ye
Recent advancements in Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation. To further enhance their factual grounding and reasoning fidelity, integrating LLMs with Knowledge Graphs (KGs) has emerged as a promising direction. Significant progress has been made in leveraging KGs to augment LLM reasoning through methods such as Retrieval-Augmented Generation. However, effectively harnessing the synergy between LLMs and KGs for robust and reliable reasoning still presents critical challenges. Specifically: (1) LLMs struggle to interpret and utilize the structured nature of KGs, owing to the discrepancy between their text-based training and the symbolic representations of KGs; (2) querying and reasoning over structured knowledge in KGs remains inefficient for LLMs, hindering complex inference. To address these limitations, we introduce the Meta-Knowledge enhanced Knowledge Graph (MKG), a novel framework that empowers LLMs to effectively leverage structured knowledge from KGs. MKG employs Meta-Knowledge, stored in a multi-store memory with a Self-Correcting Mechanism, to guide LLMs in KG retrieval and reasoning. Experimental evaluations on complex question answering benchmarks demonstrate that MKG achieves significant performance gains, outperforming the original LLM baseline as well as the Retrieval-Augmented Generation (RAG), ReAct, GraphRAG, and ToG frameworks by 25%, 17%, 11%, 3.3%, and 2.6%, respectively.
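To make the control flow described in the abstract concrete, below is a minimal Python sketch of one plausible meta-knowledge guided retrieve-reason-verify loop with a multi-store memory and a self-correcting retry. Every name here (`MetaKnowledgeMemory`, `retrieve_from_kg`, `llm_answer`, `verify`) is an illustrative assumption with stubbed internals, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: all structures and functions are assumptions,
# not the MKG authors' code.

@dataclass
class MetaKnowledgeMemory:
    """Multi-store memory: one store for retrieval hints, one for lessons
    recorded by the self-correcting mechanism."""
    retrieval_hints: list = field(default_factory=list)
    correction_notes: list = field(default_factory=list)

    def update(self, note: str) -> None:
        # The self-correcting mechanism writes back what went wrong.
        self.correction_notes.append(note)

def retrieve_from_kg(question: str, memory: MetaKnowledgeMemory) -> list:
    """Placeholder KG lookup guided by stored meta-knowledge (stubbed).
    A real system would compile the question plus hints into a graph query."""
    return [("Paris", "capital_of", "France")]

def llm_answer(question: str, triples: list) -> str:
    """Placeholder LLM call that reasons over the retrieved triples (stubbed)."""
    return "Paris"

def verify(answer: str, triples: list) -> bool:
    """Placeholder grounding check: is the answer an entity in the triples?"""
    return any(answer in (head, tail) for head, _, tail in triples)

def answer_with_self_correction(question: str, memory: MetaKnowledgeMemory,
                                max_rounds: int = 3) -> str:
    """Retrieve, answer, verify; on failure, record meta-knowledge and retry."""
    answer = ""
    for _ in range(max_rounds):
        triples = retrieve_from_kg(question, memory)
        answer = llm_answer(question, triples)
        if verify(answer, triples):
            return answer
        memory.update(f"Answer '{answer}' not grounded; broaden KG retrieval.")
    return answer  # best effort after exhausting correction rounds

if __name__ == "__main__":
    mem = MetaKnowledgeMemory(retrieval_hints=["prefer one-hop capital relations"])
    print(answer_with_self_correction("What is the capital of France?", mem))
```

The key design point this sketch illustrates is the separation of concerns: retrieval is steered by stored meta-knowledge rather than by the raw question alone, and verification failures feed back into memory instead of being silently discarded.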