Arrow Research search

Author name cluster

Kaize Shi

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers (2)

AAAI Conference 2026 Conference Paper

Exploring Selective Avoidance for Online User Behavior Analysis: A Forest of Thought Explanation

  • Xiaohua Wu
  • Lin Li
  • Kaize Shi
  • Xiaohui Tao
  • Jianwei Zhang
  • Yuefeng Li

The response behaviors observed in online user-generated content (UGC) frequently demonstrate non-linear characteristics, such as conditional branching and selective avoidance. These patterns present additional challenges for ensuring the trustworthiness of Large Language Model (LLM) reasoning, particularly as the models' unidirectional, left-to-right inference mechanisms may not adequately capture such complex reasoning dynamics. To address this, we propose Forest of Thought Explanation (FoTE), a novel prompting strategy that models selective avoidance in UGC while ensuring explanation consensus through reasoning paths across all decision sub-trees. FoTE first generates diverse reasoning paths through adaptive chain-of-thought (CoT) prompting. Each generated thought is then evaluated through cooperative game theory to quantify its fair influence. The thoughts with the top-k contribution scores are preserved and randomly sampled to emulate selective avoidance for the next reasoning iteration. Through extensive evaluations across three open-source LLMs and two established social science problems (spanning four benchmark datasets), FoTE demonstrates superior success rates compared to competing prompting strategies. Notably, its performance gains increase with the strength of selective avoidance in social problems. The trustworthiness of FoTE is enhanced by the incorporation of (1) a solid theoretical foundation and (2) a transparent reasoning path that converges toward consensus.
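The score-keep-sample loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not name its game-theoretic measure, so the Shapley value (the standard cooperative-game notion of fair influence) is assumed here, and the function names `shapley_scores` and `fote_iteration` are hypothetical.

```python
import random
from itertools import combinations
from math import factorial

def shapley_scores(thoughts, value):
    """Exact Shapley value of each thought under a coalition value function.

    `value` maps a set of thoughts to a real number; the Shapley value
    averages each thought's marginal contribution over all coalitions.
    """
    n = len(thoughts)
    scores = {t: 0.0 for t in thoughts}
    for t in thoughts:
        others = [x for x in thoughts if x != t]
        for r in range(n):
            for coalition in combinations(others, r):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                marginal = value(set(coalition) | {t}) - value(set(coalition))
                scores[t] += weight * marginal
    return scores

def fote_iteration(thoughts, value, k, sample_size, rng=random):
    """One FoTE-style step: score thoughts, keep the top-k, then randomly
    sample a subset to emulate selective avoidance in the next iteration."""
    scores = shapley_scores(thoughts, value)
    top_k = sorted(thoughts, key=lambda t: scores[t], reverse=True)[:k]
    return rng.sample(top_k, min(sample_size, len(top_k)))
```

With a toy value function such as coalition size, every thought has equal influence and the sampler simply draws from the retained set; in FoTE the value function would instead measure explanation quality over the decision sub-trees.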

NeurIPS Conference 2025 Conference Paper

Factor Decorrelation Enhanced Data Removal from Deep Predictive Models

  • Wenhao Yang
  • Lin Li
  • Xiaohui Tao
  • Kaize Shi

The imperative of user privacy protection and regulatory compliance necessitates sensitive data removal in model training, yet this process often induces distributional shifts that undermine model performance, particularly in out-of-distribution (OOD) scenarios. We propose a novel data removal approach that enhances deep predictive models through factor decorrelation and loss perturbation. Our approach introduces: (1) a discriminative-preserving factor decorrelation module employing dynamic adaptive weight adjustment and iterative representation updating to reduce feature redundancy and minimize inter-feature correlations; (2) a smoothed data removal mechanism with loss perturbation that creates information-theoretic safeguards against data leakage during removal operations. Extensive experiments on five benchmark datasets show that our approach outperforms other baselines and consistently achieves high predictive accuracy and robustness even under significant distribution shifts. The results highlight its superior efficiency and adaptability in both in-distribution and out-of-distribution scenarios.
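The decorrelation objective at the heart of module (1) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: the abstract's dynamic weight adjustment and iterative updating are omitted, and `decorrelation_penalty` is a hypothetical name for the simplest such objective, the squared off-diagonal mass of the feature correlation matrix.

```python
import numpy as np

def decorrelation_penalty(features: np.ndarray) -> float:
    """Sum of squared off-diagonal entries of the feature correlation matrix
    for an (n_samples, n_features) array. Minimizing this term pushes
    features toward pairwise decorrelation."""
    z = features - features.mean(axis=0)          # center each feature
    z = z / (z.std(axis=0) + 1e-8)                # scale to unit variance
    corr = (z.T @ z) / len(z)                     # sample correlation matrix
    off_diag = corr - np.diag(np.diag(corr))      # zero out the diagonal
    return float((off_diag ** 2).sum())
```

Two perfectly collinear features give a penalty of 2 (two off-diagonal entries of ±1), while independent features drive it toward 0; in training, a term like this would be added to the task loss so redundancy is penalized during representation updates.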