Arrow Research search

Author name cluster

Tu Ao

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers


AAAI 2025 · Conference Paper

LightPROF: A Lightweight Reasoning Framework for Large Language Model on Knowledge Graph

  • Tu Ao
  • Yanhua Yu
  • Yuling Wang
  • Yang Deng
  • Zirui Guo
  • Liang Pang
  • Pinghui Wang
  • Tat-Seng Chua

Large Language Models (LLMs) have impressive capabilities in text understanding and zero-shot reasoning. However, delays in knowledge updates may cause them to reason incorrectly or produce harmful results. Knowledge Graphs (KGs) provide rich and reliable contextual information for the reasoning process of LLMs by structurally organizing and connecting a wide range of entities and relations. Existing KG-based LLM reasoning methods inject KG knowledge into prompts only in textual form, ignoring its structural information. Moreover, they mostly rely on closed-source models or open-source models with large parameter counts, which leads to high resource consumption. To address this, we propose LightPROF, a novel Lightweight and efficient Prompt-learning ReasOning Framework for KGQA, which leverages the full potential of LLMs to tackle complex reasoning tasks in a parameter-efficient manner. Specifically, LightPROF follows a “Retrieve-Embed-Reason” process: it first accurately and stably retrieves the corresponding reasoning graph from the KG through a retrieval module. Next, through a Transformer-based Knowledge Adapter, it extracts and integrates factual and structural information from the KG, then maps this information into the LLM’s token-embedding space, creating an LLM-friendly prompt used by the LLM for the final reasoning. Additionally, LightPROF requires training only the Knowledge Adapter and is compatible with any open-source LLM. Extensive experiments on two public KGQA benchmarks demonstrate that LightPROF achieves superior performance with small-scale LLMs. Furthermore, LightPROF shows significant advantages in input token count and reasoning time.
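The “Retrieve-Embed-Reason” process described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical (function names, dimensions, the two-hop retrieval stub, and the single linear projection standing in for the paper's Transformer-based Knowledge Adapter); it only shows the shape of the pipeline, not LightPROF's actual implementation.

```python
# Illustrative sketch of a "Retrieve-Embed-Reason" pipeline.
# All names, shapes, and logic here are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def retrieve_reasoning_graph(kg, question_entities, hops=2):
    """Stub retrieval: collect triples reachable within `hops` of the
    entities mentioned in the question."""
    frontier, triples = set(question_entities), []
    for _ in range(hops):
        new_frontier = set()
        for (h, r, t) in kg:
            if h in frontier:
                triples.append((h, r, t))
                new_frontier.add(t)
        frontier = new_frontier
    return triples

class KnowledgeAdapter:
    """Toy adapter: projects triple embeddings into the LLM's
    token-embedding space so they can serve as a soft prompt."""
    def __init__(self, kg_dim, llm_dim):
        self.W = rng.normal(0.0, 0.02, size=(kg_dim, llm_dim))

    def __call__(self, triple_embeddings):
        # (num_triples, kg_dim) -> (num_triples, llm_dim)
        return triple_embeddings @ self.W

kg = [("Paris", "capital_of", "France"), ("France", "in", "Europe")]
triples = retrieve_reasoning_graph(kg, {"Paris"})
embeds = rng.normal(size=(len(triples), 64))      # stand-in KG embeddings
soft_prompt = KnowledgeAdapter(64, 4096)(embeds)  # would prefix a frozen LLM
print(soft_prompt.shape)  # (2, 4096)
```

In this framing, only the adapter's projection would be trained while the LLM stays frozen, which matches the parameter-efficiency claim in the abstract.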

AAAI 2025 · Conference Paper

LS-TGNN: Long and Short-Term Temporal Graph Neural Network for Session-Based Recommendation

  • Zhonghong Ou
  • Xiao Zhang
  • Yifan Zhu
  • Shuai Lyu
  • Jiahao Liu
  • Tu Ao

Session-Based Recommendation (SBR) based on Graph Neural Networks (GNNs) has become a new paradigm for recommender systems and plays a fundamental role in e-commerce and related domains. Existing graph aggregation methods primarily form node representations by capturing basic relationships between neighboring and central nodes. Despite their encouraging results, the global relationships of items and user intentions within sessions typically change over time, which degrades the effectiveness of existing embedding schemes. To resolve this challenge, we propose a Long and Short-Term Temporal Graph Neural Network (LS-TGNN) for SBR. LS-TGNN employs a novel temporal session graph to aggregate neighborhood information and models user interests from both long- and short-term perspectives. Specifically, we design long-term and short-term encoders to model the long- and short-term interests of users, respectively. To better capture user interests across different time dimensions, we introduce an item-granularity method that distinguishes between long- and short-term interests. Extensive experiments on three widely used datasets demonstrate that LS-TGNN outperforms existing methods by a large margin.
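The long/short-term split described above can be sketched in a few lines. This is not LS-TGNN itself (the paper's model aggregates over a temporal session graph with GNN layers); it is a simplified stand-in where the long-term interest is an attention pool over the whole session, the short-term interest covers recent items, and a hypothetical per-dimension gate plays the role of item-granularity fusion.

```python
# Toy fusion of long- and short-term session interests.
# All components are simplified stand-ins for the paper's encoders.
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def session_interest(item_embeds, short_window=2):
    """Combine a long-term encoding (attention over the whole session)
    with a short-term encoding (most recent items) via a gate."""
    # Long-term: attend to all items, weighted by similarity to the last click.
    query = item_embeds[-1]
    weights = softmax(item_embeds @ query)
    long_term = weights @ item_embeds
    # Short-term: average of the most recent items.
    short_term = item_embeds[-short_window:].mean(axis=0)
    # Hypothetical per-dimension gate mixing the two interests.
    gate = 1.0 / (1.0 + np.exp(-(long_term - short_term)))
    return gate * long_term + (1.0 - gate) * short_term

session = rng.normal(size=(5, 16))  # 5 clicked items, 16-dim embeddings
user_repr = session_interest(session)
scores = session @ user_repr        # score the session's items as candidates
```

Ranking candidates by `scores` illustrates how a fused session representation drives the final recommendation step.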