Arrow Research search

Author name cluster

Nan Hu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers (5)

AAAI Conference 2026 Conference Paper

TaxReasoning: Benchmarking Knowledge-Intensive Mathematical Reasoning with Evolving Tax Laws

  • Nan Hu
  • Yike Wu
  • Jiaye Li
  • HuiKang Hu
  • Guilin Qi
  • Songlin Zhai
  • Yongrui Chen
  • Tianxing Wu

Recent studies have explored the capabilities of large language models (LLMs) in solving knowledge-intensive mathematical reasoning problems. However, existing benchmarks predominantly involve static theorems that LLMs have encountered during pretraining, failing to assess dynamic knowledge integration. In this work, we introduce TaxReasoning, a novel benchmark designed to evaluate LLMs’ abilities in real-world tax calculation scenarios. These tasks require not only mathematical reasoning and numerical computation, but also the extraction and application of complex, frequently updated tax regulations. Through extensive experiments with state-of-the-art LLMs using diverse prompting strategies and knowledge augmentation techniques, we uncover substantial limitations in their ability to handle dynamic, knowledge-intensive questions—primarily due to missing domain-specific knowledge and ineffective retrieval. Even the best-performing models fall significantly short of human-level performance. Our analysis points to key avenues for improvement, including enhancing LLMs' reasoning capabilities, developing more effective knowledge summarization techniques, and improving retrieval strategies. TaxReasoning offers a critical testbed for advancing LLMs in dynamic knowledge-intensive domains.
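To illustrate the kind of knowledge-intensive arithmetic such tax-calculation tasks involve, here is a toy progressive-bracket computation in Python. The bracket values and function name are invented for illustration and are not drawn from the TaxReasoning benchmark itself.

```python
def tax_due(income, brackets):
    """Progressive tax over ascending brackets.

    brackets: list of (upper_bound, rate), ascending; upper_bound=None
    marks the final, uncapped bracket. Values here are hypothetical.
    """
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if upper is None or income < upper:
            # Income tops out inside this bracket.
            return tax + (income - lower) * rate
        # Full bracket is taxed at this rate; move to the next one.
        tax += (upper - lower) * rate
        lower = upper
    return tax

# Example with made-up brackets: 10% to 10k, 20% to 40k, 30% above.
brackets = [(10_000, 0.10), (40_000, 0.20), (None, 0.30)]
```

The difficulty the benchmark probes is not this arithmetic but keeping `brackets` current with frequently amended regulations, which is exactly what static pretraining cannot guarantee.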

AAAI Conference 2025 Conference Paper

HeGTa: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding

  • Rihui Jin
  • Yu Li
  • Guilin Qi
  • Nan Hu
  • Yuan-Fang Li
  • Jiaoyan Chen
  • Jianan Wang
  • Yongrui Chen

Table Understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HeGTa, a heterogeneous graph (HG)-enhanced large language model (LLM) designed for few-shot TU tasks. This framework aligns structural table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning. It also addresses complex tables with a multi-task pre-training scheme, incorporating three novel multi-granularity self-supervised HG pre-text tasks. We empirically demonstrate the effectiveness of HeGTa, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.
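The table-as-heterogeneous-graph idea can be sketched as follows: cell, row, and column nodes connected by typed edges. The node and edge naming here is an assumption for illustration only; HeGTa's actual graph construction and pre-text tasks are more involved.

```python
def table_to_hetero_graph(rows):
    """Minimal sketch: turn a table (list of rows of cell values) into a
    heterogeneous graph of typed nodes and typed edges.

    Node/edge types ("row", "col", "cell", "in_row", "in_col") are
    hypothetical, chosen only to illustrate heterogeneity.
    """
    nodes, edges = [], []
    for i, row in enumerate(rows):
        nodes.append(("row", i))
        for j, value in enumerate(row):
            if i == 0:
                nodes.append(("col", j))          # one node per column
            nodes.append(("cell", (i, j), value))  # one node per cell
            edges.append((("cell", (i, j)), "in_row", ("row", i)))
            edges.append((("cell", (i, j)), "in_col", ("col", j)))
    return nodes, edges
```

A graph like this lets merged or irregular cells attach to multiple row/column nodes, which is one way structural semantics of complex tables can be exposed to a downstream encoder.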

JBHI Journal 2024 Journal Article

Hybrid Brain-Computer Interface Controlled Soft Robotic Glove for Stroke Rehabilitation

  • Ruoqing Zhang
  • Shanshan Feng
  • Nan Hu
  • Shunkang Low
  • Meng Li
  • Xiaogang Chen
  • Hongyan Cui

Soft robotic gloves controlled by a brain-computer interface (BCI) have demonstrated effectiveness in hand rehabilitation for stroke patients. Current systems rely on static visual representations for patients to perform motor imagery (MI) tasks, resulting in lower BCI performance. Therefore, this study innovatively combined MI and high-frequency steady-state visual evoked potentials (SSVEP) to construct a friendly and natural hybrid BCI paradigm. Specifically, the stimulation interface sequentially presented decomposed action pictures of the left and right hands gripping a ball, with the pictures flashing at specific stimulation frequencies (left: 34 Hz, right: 35 Hz). Integrating the soft robotic glove as feedback, we established a comprehensive “peripheral - central - peripheral” hand rehabilitation system to facilitate the hand rehabilitation of patients. Filter bank common spatial pattern (FBCSP) and filter bank canonical correlation analysis (FBCCA) algorithms were used to identify MI and SSVEP signals, respectively. Additionally, we proposed a novel fusion algorithm to decide the final output of the system. The feasibility of the proposed system was validated through online experiments involving 12 healthy subjects and 9 stroke patients, achieving accuracy rates of 95.83 ± 6.83% and 63.33 ± 10.38%, respectively. The accuracy of MI and SSVEP in the 12 healthy subjects reached 81.67 ± 15.63% and 95.14 ± 7.47%, both lower than the accuracy after fusion; these results confirmed the effectiveness of the proposed fusion algorithm. The accuracy rate exceeded 50% in both healthy subjects and patients, confirming the effectiveness of the proposed system.
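The SSVEP branch of such a system can be illustrated with a plain canonical-correlation classifier: the stimulation frequency (34 or 35 Hz here) whose sine/cosine references correlate best with the EEG segment wins. This is a simplified single-band stand-in, assuming a standard CCA formulation; the paper's FBCCA additionally combines correlations across a bank of sub-band filters, which is omitted here.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the row spaces of X and Y.

    X: (channels, samples) EEG segment; Y: (refs, samples) references.
    Minimal SVD-based CCA, not the paper's implementation.
    """
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    # Orthonormalize each block, then take the top singular value.
    Qx, _ = np.linalg.qr(X.T)
    Qy, _ = np.linalg.qr(Y.T)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_references(freq, fs, n_samples, n_harmonics=3):
    """Sine/cosine reference set for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs)

def classify_ssvep(eeg, fs, freqs=(34.0, 35.0)):
    """Return (index of winning frequency, per-frequency scores)."""
    scores = [cca_max_corr(eeg, ssvep_references(f, fs, eeg.shape[1]))
              for f in freqs]
    return int(np.argmax(scores)), scores
```

In the full hybrid system, this score would be fused with the FBCSP-based MI decision before driving the glove.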

AAAI Conference 2022 Conference Paper

DKPLM: Decomposable Knowledge-Enhanced Pre-trained Language Model for Natural Language Understanding

  • Taolin Zhang
  • Chengyu Wang
  • Nan Hu
  • Minghui Qiu
  • Chengguang Tang
  • Xiaofeng He
  • Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities. To guarantee effective knowledge injection, previous studies integrate models with knowledge encoders for representing knowledge retrieved from knowledge graphs. The operations for knowledge retrieval and encoding bring significant computational burdens, restricting the usage of such models in real-world applications that require high inference speed. In this paper, we propose a novel KEPLM named DKPLM that decomposes the knowledge injection process of pre-trained language models across the pre-training, fine-tuning and inference stages, which facilitates the application of KEPLMs in real-world scenarios. Specifically, we first detect knowledge-aware long-tail entities as the targets for knowledge injection, enhancing the KEPLMs' semantic understanding abilities and avoiding the injection of redundant information. The embeddings of long-tail entities are replaced by “pseudo token representations” formed by relevant knowledge triples. We further design a relational knowledge decoding task for pre-training to force the models to truly understand the injected knowledge through relation triple reconstruction. Experiments show that our model significantly outperforms other KEPLMs on zero-shot knowledge probing tasks and multiple knowledge-aware language understanding tasks. We further show that DKPLM has a higher inference speed than other competing models due to the decomposing mechanism.
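The "pseudo token representation" idea can be sketched in a few lines: a long-tail entity's embedding is replaced by a pooled encoding of its knowledge-graph triples. The additive relation-tail composition and mean pooling below are assumptions for illustration, not DKPLM's actual architecture.

```python
import numpy as np

def pseudo_token(entity, triples, embed):
    """Replace an entity embedding with a pooled triple encoding.

    triples: list of (head, relation, tail) strings.
    embed:   dict mapping names to vectors (toy lookup table).
    Composition (relation + tail) and mean pooling are hypothetical
    stand-ins for the model's learned encoder.
    """
    vecs = [embed[rel] + embed[tail]        # simple additive composition
            for head, rel, tail in triples if head == entity]
    if not vecs:
        return embed[entity]                # no triples: keep original
    return np.mean(vecs, axis=0)            # mean-pool over triples
```

Because the pseudo token is computed from the triples once, no separate knowledge encoder needs to run at inference time, which is the source of the speed advantage the abstract claims.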

YNICL Journal 2021 Journal Article

PET evidence of preclinical cerebellar amyloid plaque deposition in autosomal dominant Alzheimer’s disease-causing Presenilin-1 E280A mutation carriers

  • Valentina Ghisays
  • Francisco Lopera
  • Dhruman D. Goradia
  • Hillary D. Protas
  • Michael H. Malek-Ahmadi
  • Yinghua Chen
  • Vivek Devadas
  • Ji Luo

BACKGROUND: In contrast to sporadic Alzheimer's disease, autosomal dominant Alzheimer's disease (ADAD) is associated with greater neuropathological evidence of cerebellar amyloid plaque (Aβ) deposition. In this study, we used positron emission tomography (PET) measurements of fibrillar Aβ burden to characterize the presence and age at onset of cerebellar Aβ deposition in cognitively unimpaired (CU) Presenilin-1 (PSEN1) E280A mutation carriers from the world's largest extended family with ADAD. METHODS: Florbetapir and ¹¹C Pittsburgh compound B (PiB) PET data from two independent studies, the API ADAD Colombia Trial (NCT01998841) and the Colombia-Boston (COLBOS) longitudinal biomarker study, were included. The tracers were selected independently by the respective sponsors prior to the start of each study and used exclusively throughout. Template-based cerebellar Aβ SUVRs (standard-uptake value ratios) were computed using a known-to-be-spared pons reference region (cerebellar SUVR_pons) to a) compare 28-56-year-old CU carriers and non-carriers; b) estimate the age at which cerebellar SUVR_pons began to differ significantly between carrier and non-carrier groups; and c) characterize associations in carriers with age, cortical SUVR_pons, delayed recall memory, and the API ADAD composite score. RESULTS: Florbetapir and PiB cerebellar SUVR_pons were significantly higher in carriers than non-carriers (p < 0.0001). Cerebellar SUVR_pons began to distinguish carriers from non-carriers at age 34, 10 years before the carriers' estimated age at mild cognitive impairment onset. Florbetapir and PiB cerebellar SUVR_pons in carriers were positively correlated with age (r = 0.44 & 0.69, p < 0.001) and cortical SUVR_pons (r = 0.55 & 0.69, p < 0.001), and negatively correlated with delayed recall memory (r = -0.21 & -0.50, p < 0.05, unadjusted for cortical SUVR_pons) and the API ADAD composite (r = -0.25, p < 0.01, unadjusted for cortical SUVR_pons in the florbetapir API ADAD cohort). 
CONCLUSION: This PET study provides evidence of cerebellar Aβ plaque deposition in CU carriers starting about a decade before the clinical onset of ADAD. Additional studies are needed to clarify the impact of using a cerebellar versus pons reference region on the power to detect and track ADAD changes, even in preclinical stages of this disorder.
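The SUVR measurements used throughout this abstract reduce to a simple ratio: mean tracer uptake in the target region divided by mean uptake in the reference region (here, a known-to-be-spared pons). A minimal sketch, with the volume and region masks invented for illustration:

```python
import numpy as np

def suvr(pet, target_mask, reference_mask):
    """Standard-uptake value ratio from a 3-D PET volume.

    pet:            3-D array of voxel uptake values (toy data here).
    target_mask:    boolean mask of the target region (e.g. cerebellum).
    reference_mask: boolean mask of the reference region (e.g. pons).
    Real pipelines add template registration and partial-volume
    handling, which this sketch omits.
    """
    return pet[target_mask].mean() / pet[reference_mask].mean()
```

The paper's point about reference-region choice follows directly from the denominator: any uptake in the region chosen as reference is divided out, so measuring deposition in the cerebellum requires a reference region, like the pons, that is itself spared.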