Arrow Research search

Author name cluster

Binhan Yang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers (2)

AAAI Conference 2026 · Conference Paper

A Novel Retrieve-Read-Group Paradigm for Open Knowledge Base Canonicalization

  • Binhan Yang
  • Wei Shen
  • Han Tian

Noun phrases (NPs) in open knowledge bases (OKBs) are not canonicalized, leading to scattered knowledge and motivating the OKB canonicalization task (i.e., clustering synonymous noun phrases into the same group and assigning them a unique identifier). However, existing OKB canonicalization methods typically adhere to a traditional embedding-centered pipeline, which fails to exploit the direct interaction between NPs for pairwise NP similarity calculations, resulting in suboptimal performance and an extensive reliance on external resources. To address these limitations, we introduce a groundbreaking retrieve-read-group paradigm that enables fine-grained pairwise NP similarity calculations by leveraging the direct NP interaction in the reading stage, thereby reducing the reliance on external resources. As an instantiation of this paradigm, we propose DUVK, a novel self-supervised framework that fully integrates the dual-view knowledge involved in OKBs from the relational view and the semantic view. In the retriever component of DUVK, a dual-view cross-training strategy is designed to make the two view-specific encoders mutually reinforce each other by capitalizing on the complementary knowledge delivered from both views. Experimental results demonstrate that, even without the need for any external resources, DUVK outperforms all state-of-the-art competitors that rely on such resources.
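As a rough illustration of the retrieve-read-group flow described in the abstract, the sketch below walks one toy corpus through the three stages. It is not the authors' DUVK framework: the retriever and reader here are simple token-overlap stand-ins for the learned, view-specific encoders, and the function names, threshold, and example phrases are invented for illustration.

```python
# Hypothetical sketch of a retrieve-read-group pipeline for NP canonicalization.
# Not the authors' DUVK implementation: the retriever and reader are token-overlap
# stand-ins for the learned encoders described in the abstract.

def retrieve(query, corpus, top_k=3):
    """Retrieve candidate NPs that share at least one token with the query."""
    q = set(query.lower().split())
    scored = sorted(
        ((len(q & set(c.lower().split())), c) for c in corpus if c != query),
        reverse=True,
    )
    return [c for score, c in scored[:top_k] if score > 0]

def read(np_a, np_b):
    """'Reading' stage: score the pair directly (Jaccard overlap as a placeholder)."""
    a, b = set(np_a.lower().split()), set(np_b.lower().split())
    return len(a & b) / len(a | b)

def group(corpus, threshold=0.4):
    """Group NPs whose retrieved pairs score above the threshold (union-find)."""
    parent = {np_: np_ for np_ in corpus}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for np_ in corpus:
        for cand in retrieve(np_, corpus):
            if read(np_, cand) >= threshold:
                parent[find(cand)] = find(np_)

    clusters = {}
    for np_ in corpus:
        clusters.setdefault(find(np_), []).append(np_)
    return list(clusters.values())

if __name__ == "__main__":
    nps = ["new york city", "york city", "big apple", "los angeles", "la county"]
    print(group(nps))  # the first two phrases cluster together; the rest stay apart
```

A learned reader would replace the Jaccard score with a pairwise model that attends across both phrases at once; that direct NP-to-NP interaction is what the abstract credits for removing the dependence on external resources.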

NeurIPS Conference 2025 · Conference Paper

Federated Continual Learning via Orchestrating Multi-Scale Expertise

  • Xiaoyang Yi
  • Yang Liu
  • Binhan Yang
  • Jian Zhang

Federated continual learning (FCL) aims to maintain the model's performance on old tasks (i.e., stability) while enhancing its ability to acquire knowledge from current tasks (i.e., plasticity). With the development of pre-trained models (PTMs), fine-tuning PTMs on clients has become a promising approach to leveraging their extensive knowledge in FCL. In this paper, we propose MultiFCL, a novel FCL framework that fine-tunes PTMs to adapt to FCL while preserving their strong generalization capabilities. Specifically, to ensure stability, MultiFCL introduces lightweight adapters for task adaptation, which are subsequently frozen to prevent catastrophic forgetting. Moreover, by utilizing the semantic features of old tasks, MultiFCL performs multi-modal initialization of new task class prototypes. To enhance plasticity, MultiFCL employs a multi-expert training mechanism that integrates multi-scale feature learning with multi-teacher dynamic self-distillation. Through intra-client and inter-client expert communication, MultiFCL facilitates cross-task and cross-client knowledge fusion. Experimental results demonstrate that MultiFCL achieves state-of-the-art performance across multiple datasets and settings, showcasing its effectiveness in FCL scenarios.
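Two of the mechanisms mentioned in the abstract, freezing lightweight adapters after each task and classifying with class prototypes built from feature means, can be sketched briefly. The snippet below is a hypothetical PyTorch illustration, not the authors' MultiFCL code: the Adapter and Client classes, dimensions, and the training objective are invented stand-ins, and the multi-expert distillation and inter-client communication are omitted.

```python
# Hypothetical sketch: frozen backbone + per-task adapters + prototype classifier.
# Not the authors' MultiFCL implementation; names, sizes, and the training loop
# are illustrative stand-ins. Multi-expert distillation and federation are omitted.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight residual bottleneck adapter trained per task, then frozen."""
    def __init__(self, dim=32, bottleneck=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                                 nn.Linear(bottleneck, dim))

    def forward(self, x):
        return x + self.net(x)  # residual adaptation keeps the PTM features intact

class Client:
    def __init__(self, backbone):
        self.backbone = backbone                  # shared pre-trained model, kept frozen
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.adapters = {}                        # task_id -> frozen adapter
        self.prototypes = {}                      # class_id -> mean feature vector

    def learn_task(self, task_id, x, y, epochs=100):
        adapter = Adapter()
        head = nn.Linear(32, int(y.max()) + 1)    # temporary head used only for training
        opt = torch.optim.Adam(list(adapter.parameters()) + list(head.parameters()), lr=1e-2)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            loss = loss_fn(head(adapter(self.backbone(x))), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        for p in adapter.parameters():            # freeze the adapter to limit forgetting
            p.requires_grad_(False)
        self.adapters[task_id] = adapter
        with torch.no_grad():                      # store class prototypes as feature means
            feats = adapter(self.backbone(x))
            for c in y.unique():
                self.prototypes[int(c)] = feats[y == c].mean(0)

    @torch.no_grad()
    def predict(self, task_id, x):
        feats = self.adapters[task_id](self.backbone(x))
        classes = sorted(self.prototypes)
        protos = torch.stack([self.prototypes[c] for c in classes])
        nearest = torch.cdist(feats, protos).argmin(dim=1)  # nearest-prototype rule
        return torch.tensor(classes)[nearest]

# Toy usage: one client, one task with two well-separated classes.
client = Client(backbone=nn.Linear(16, 32))
x = torch.cat([torch.randn(20, 16) + 2.0, torch.randn(20, 16) - 2.0])
y = torch.cat([torch.zeros(20), torch.ones(20)]).long()
client.learn_task(task_id=0, x=x, y=y)
print(client.predict(0, x))
```

Because each adapter is frozen once its task is learned and inference only reads stored prototypes, adding a new task cannot overwrite what an earlier adapter encodes; the stability/plasticity trade-off in the paper is handled by richer machinery (multi-scale experts and dynamic self-distillation) that this sketch does not attempt to reproduce.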