Arrow Research search

Author name cluster

Jiangnan Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
1 author row

Possible papers (9)

AAAI 2026 · Conference Paper

Improving Exact Algorithm for Pseudo Boolean Optimization with Two New Phase Selection Heuristics

  • Yujiao Zhao
  • Yizhan Xiang
  • Jiangnan Li
  • Yiyuan Wang
  • Minghao Yin

The pseudo-Boolean optimization (PBO) problem involves optimizing a linear objective function under linear inequality constraints defined over Boolean variables. PBO is widely used for modeling combinatorial optimization problems, particularly in real-world scenarios. In core-guided CDCL-based exact solvers, the way branching variables are assigned values, known as phase selection, significantly affects solving efficiency. This paper introduces two strategies that enhance solver performance by improving phase selection. First, we design a new phase selection strategy that actively guides variables in the objective function toward assignments closer to the optimal solution. Second, to prevent the solver from becoming trapped in local solutions, we propose a reinforcement learning-based rephase mechanism that dynamically updates and resets variable phases. We integrate the two phase selection strategies into two state-of-the-art PBO solvers and compare them against top-performing solvers from the PB competitions, using benchmarks from these competitions. The experimental results show that our solvers outperform the winning solver from the competitions.
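
The abstract describes both heuristics at a high level. As a rough, hypothetical illustration only, the Python sketch below shows what an objective-aware phase pick and an epsilon-greedy rephase could look like in a minimizing PBO solver; the solver hooks (a `pick_phase` callback, per-variable polarity scores) are assumptions, not the paper's interfaces.

```python
import random

def pick_phase(var, obj_coeffs, saved_phase):
    """Branching polarity (True/False) for `var` in a minimizing PBO solver.
    Objective variables are steered toward the value that contributes less
    to the objective; all other variables fall back to classic phase saving."""
    if var in obj_coeffs:
        # a positive coefficient means assigning False lowers the objective
        return obj_coeffs[var] < 0
    return saved_phase.get(var, False)

def rephase(saved_phase, scores, epsilon=0.1, seed=0):
    """Epsilon-greedy reset of saved phases, loosely in the spirit of a
    reinforcement-learning rephase: mostly keep the higher-scoring polarity,
    occasionally explore the opposite one to escape local solutions."""
    rng = random.Random(seed)
    for var, (score_false, score_true) in scores.items():
        best = score_true > score_false
        saved_phase[var] = (not best) if rng.random() < epsilon else best
    return saved_phase
```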

IJCAI 2025 · Conference Paper

A Novel Local Search Algorithm for the Vertex Bisection Minimization Problem

  • Rui Sun
  • Xinyu Wang
  • Yiyuan Wang
  • Jiangnan Li
  • Yi Zhou

The vertex bisection minimization problem (VBMP) is a fundamental graph partitioning problem with numerous real-world applications. In this study, we propose a (k, l, S)-cluster guided local search algorithm to address this challenge. First, we propose a novel (k, l, S)-cluster enumeration procedure based on two key concepts: the (k, l, S)-cluster and the local cluster core. The (k, l, S)-cluster constrains both the connectivity and the distinct boundaries of a given vertex set, and the local cluster core represents the most cohesive substructure within a (k, l, S)-cluster. Building on this enumeration procedure, we present a novel (k, l, S)-cluster guided perturbation mechanism designed to escape local optima. Next, we propose a two-manner local search procedure that employs two distinct search modes to explore the neighboring search space efficiently. Experimental results demonstrate that the proposed algorithm performs best on nearly all instances.
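
As context for readers unfamiliar with the problem, here is a bare-bones swap-based local search for vertex bisection, assuming the common formulation that minimizes the number of side-0 vertices with a neighbour on side 1. It shows only the generic move/evaluate/undo loop; none of the paper's (k, l, S)-cluster enumeration, perturbation, or two-manner search is reproduced.

```python
import random

def vb_cost(adj, part):
    """Count vertices on side 0 that have at least one neighbour on side 1."""
    return sum(1 for v, side in part.items()
               if side == 0 and any(part[u] == 1 for u in adj[v]))

def local_search(adj, part, iters=10000, seed=0):
    """adj: vertex -> set of neighbours; part: vertex -> 0 or 1 (balanced)."""
    rng = random.Random(seed)
    best = vb_cost(adj, part)
    side0 = [v for v in part if part[v] == 0]
    side1 = [v for v in part if part[v] == 1]
    for _ in range(iters):
        a, b = rng.choice(side0), rng.choice(side1)
        part[a], part[b] = 1, 0              # tentative balanced swap
        cost = vb_cost(adj, part)
        if cost <= best:                     # accept improving/plateau moves
            best = cost
            side0[side0.index(a)] = b
            side1[side1.index(b)] = a
        else:
            part[a], part[b] = 0, 1          # undo the swap
    return part, best
```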

IJCAI 2025 · Conference Paper

NuMDS: An Efficient Local Search Algorithm for Minimum Dominating Set Problem

  • Rui Sun
  • Zhaohui Liu
  • Yiyuan Wang
  • Han Xiao
  • Jiangnan Li
  • Jiejiang Chen

The minimum dominating set (MDS) problem is a crucial NP-hard combinatorial optimization problem with wide real-world applications. In this paper, we propose an efficient local search algorithm, named NuMDS, to solve the MDS problem, built on three key ideas. First, we introduce a dominate propagation-based reduction method that fixes a portion of the vertices in a given graph. Second, we develop a novel two-phase initialization method based on a decomposition method. Third, we propose a multi-stage local search procedure that adopts three different search manners according to the current stage of the search. We conduct extensive experiments to demonstrate the effectiveness of NuMDS, and the results clearly indicate that NuMDS outperforms previous state-of-the-art algorithms on almost all instances.
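
For a concrete anchor, the sketch below shows a standard greedy construction for a dominating set, the kind of baseline starting point a local search such as NuMDS would refine; it is not the paper's reduction, initialization, or multi-stage procedure.

```python
def greedy_mds(adj):
    """Greedy dominating-set construction.
    adj: dict vertex -> set of neighbours. Returns a dominating set."""
    undominated = set(adj)
    solution = set()
    while undominated:
        # pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices
        v = max(adj, key=lambda u: len((adj[u] | {u}) & undominated))
        solution.add(v)
        undominated -= adj[v] | {v}
    return solution
```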

AAAI 2024 · Conference Paper

Enhancing Multi-Label Classification via Dynamic Label-Order Learning

  • Jiangnan Li
  • Yice Zhang
  • Shiwei Chen
  • Ruifeng Xu

Generative methods tackle Multi-Label Classification (MLC) by autoregressively generating label sequences. These methods excel at modeling label correlations and have achieved outstanding performance. However, a key challenge is determining the order of the labels, as empirical findings indicate that different orders significantly affect model learning and inference. Previous works adopt static label-ordering methods, assigning a unified label order to all samples based on label frequencies or co-occurrences. Such static methods neglect the unique semantics of each sample. More critically, they can cause the model to rigidly memorize the training order, resulting in missing labels during inference. In light of these limitations, this paper proposes a dynamic label-order learning approach that adaptively learns a label order for each sample. Specifically, our approach adopts a difficulty-prioritized principle and iteratively constructs the label sequence based on the sample's semantics. To reduce the additional cost incurred by label-order learning, we use the same SEQ2SEQ model for label-order learning and MLC learning and introduce a unified loss function for joint optimization. Extensive experiments on public datasets reveal that our approach greatly outperforms previous methods. We will release our code at https://github.com/KagamiBaka/DLOL.
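
One reading of the difficulty-prioritized, per-sample ordering is sketched below: at each step the next label is chosen by the model's confidence given the labels ordered so far. The `score_next` hook into the seq2seq model is a hypothetical stand-in, not the paper's exact criterion.

```python
def dynamic_label_order(gold_labels, score_next):
    """Iteratively build a label sequence for one sample.

    score_next(prefix, label) -> float: the model's confidence that `label`
    comes next given the already-ordered `prefix` (assumed model hook).
    """
    remaining = set(gold_labels)
    order = []
    while remaining:
        nxt = max(remaining, key=lambda lab: score_next(order, lab))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```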

AAAI 2024 · Conference Paper

SIG: Speaker Identification in Literature via Prompt-Based Generation

  • Zhenlin Su
  • Liyan Xu
  • Jin Xu
  • Jiangnan Li
  • Mingdu Huangfu

Identifying the speakers of quotations in narratives is an important task in literary analysis, with challenging scenarios including out-of-domain inference for unseen speakers and non-explicit cases where no speaker is mentioned in the surrounding context. In this work, we propose SIG, a simple and effective generation-based method that verbalizes the task and quotation input using designed prompt templates, which also enables easy integration of auxiliary tasks that further bolster speaker identification performance. The prediction can either come directly from the model's generation or be determined by the highest generation probability among the speaker candidates. By design, SIG supports out-of-domain evaluation and realizes an open-world classification paradigm that can accept any form of candidate input. We perform both cross-domain and in-domain evaluation on PDNC, the largest dataset for this task, where empirical results suggest that SIG outperforms previous baselines of complicated designs, as well as zero-shot ChatGPT, especially excelling in the hard non-explicit scenarios with up to 17% improvement. Additional experiments on another dataset, WP, further corroborate the efficacy of SIG.
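
The second prediction mode the abstract mentions, scoring each speaker candidate by its generation probability, can be sketched as follows. The prompt template and the `seq_logprob` hook (log-probability of generating a candidate given the verbalized input) are assumptions for illustration, not the paper's templates.

```python
def identify_speaker(quote, context, candidates, seq_logprob):
    """Pick the candidate the generator is most likely to emit."""
    prompt = (f"Context: {context}\n"
              f"Quotation: {quote}\n"
              "The speaker of the quotation is:")   # assumed template
    return max(candidates, key=lambda c: seq_logprob(prompt, c))
```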

NeurIPS 2022 · Conference Paper

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

  • Yuanxin Liu
  • Fandong Meng
  • Zheng Lin
  • Jiangnan Li
  • Peng Fu
  • Yanan Cao
  • Weiping Wang
  • Jie Zhou

Despite the remarkable success of pre-trained language models (PLMs), they still face two challenges. First, large-scale PLMs are inefficient in terms of memory footprint and computation. Second, on downstream tasks, PLMs tend to rely on dataset bias and struggle to generalize to out-of-distribution (OOD) data. In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting performance. Such subnetworks can be found in three scenarios: 1) the fine-tuned PLMs, 2) the raw PLMs, which are then fine-tuned in isolation, and even 3) PLMs without any parameter fine-tuning. However, these results were only obtained in the in-distribution (ID) setting. In this paper, we extend the study of PLM subnetworks to the OOD setting, investigating whether sparsity and robustness to dataset bias can be achieved simultaneously. To this end, we conduct extensive experiments with the pre-trained BERT model on three natural language understanding (NLU) tasks. Our results demonstrate that sparse and robust subnetworks (SRNets) can consistently be found in BERT, across the aforementioned three scenarios, using different training and compression methods. Furthermore, we explore the upper bound of SRNets using the OOD information and show that there exist sparse and almost unbiased BERT subnetworks. Finally, we present 1) an analytical study that provides insights into how to make the SRNet search process more efficient and 2) a solution to improve subnetwork performance at high sparsity. The code is available at https://github.com/llyx97/sparse-and-robust-PLM.
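
As a pointer to what "finding a subnetwork" means mechanically, here is a minimal global magnitude-pruning sketch that produces a binary mask at a target sparsity; the paper's actual training and compression pipelines are richer than this.

```python
import torch

def magnitude_masks(named_weights, sparsity=0.7):
    """named_weights: dict name -> torch.Tensor. Returns name -> 0/1 mask
    that keeps the largest-magnitude weights globally."""
    all_w = torch.cat([w.abs().flatten() for w in named_weights.values()])
    k = int(sparsity * all_w.numel())       # number of weights to prune
    # the k-th smallest magnitude becomes the global pruning threshold
    threshold = all_w.kthvalue(k).values if k > 0 else torch.tensor(-1.0)
    return {name: (w.abs() > threshold).float()
            for name, w in named_weights.items()}
```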

IJCAI 2022 · Conference Paper

Neutral Utterances are Also Causes: Enhancing Conversational Causal Emotion Entailment with Social Commonsense Knowledge

  • Jiangnan Li
  • Fandong Meng
  • Zheng Lin
  • Rui Liu
  • Peng Fu
  • Yanan Cao
  • Weiping Wang
  • Jie Zhou

Conversational Causal Emotion Entailment aims to detect the causal utterances for a non-neutral targeted utterance in a conversation. In this work, we model conversations as graphs to overcome the implicit contextual modelling of the original entailment style. Following previous work, we further introduce emotion information into the graphs. Emotion information can markedly promote the detection of causal utterances whose emotion is the same as that of the targeted utterance. However, it is still hard to detect causal utterances with different emotions, especially neutral ones, because models are limited in reasoning about causal clues and passing them between utterances. To alleviate this problem, we introduce social commonsense knowledge (CSK) and propose a Knowledge Enhanced Conversation graph (KEC), which propagates CSK between pairs of utterances. As not all CSK is emotionally suitable for utterances, we propose a sentiment-realized knowledge selecting strategy to filter it. To process the KEC, we further construct Knowledge Enhanced Directed Acyclic Graph networks. Experimental results show that our method outperforms the baselines and infers more causes whose emotions differ from that of the targeted utterance.
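
The sentiment-realized filtering idea can be pictured with a toy check that keeps only commonsense facts whose polarity is compatible with the utterance's emotion; the polarity lexicon and scorer here are placeholders, not the paper's resources.

```python
EMOTION_POLARITY = {"happy": 1, "sad": -1, "angry": -1, "neutral": 0}  # assumed

def select_csk(utterance_emotion, csk_facts, polarity_of):
    """csk_facts: list of knowledge strings; polarity_of(fact) -> -1, 0, or 1
    (hypothetical sentiment scorer). Keep facts that do not clash with the
    emotion polarity of the utterance."""
    target = EMOTION_POLARITY.get(utterance_emotion, 0)
    return [f for f in csk_facts
            if target == 0 or polarity_of(f) in (0, target)]
```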

IJCAI 2021 · Conference Paper

Learning Class-Transductive Intent Representations for Zero-shot Intent Detection

  • Qingyi Si
  • Yuanxin Liu
  • Peng Fu
  • Zheng Lin
  • Jiangnan Li
  • Weiping Wang

Zero-shot intent detection (ZSID) aims to deal with continuously emerging intents for which no annotated training data exist. However, existing ZSID systems suffer from two limitations: 1) they are not good at modeling the relationship between seen and unseen intents, and 2) they cannot effectively recognize unseen intents under the generalized zero-shot intent detection (GZSID) setting. A critical problem behind these limitations is that representations of unseen intents cannot be learned during training. To address this problem, we propose a novel framework that utilizes unseen class labels to learn Class-Transductive Intent Representations (CTIR). Specifically, we allow the model to predict unseen intents during training, with the corresponding label names serving as input utterances. On this basis, we introduce a multi-task learning objective, which encourages the model to learn the distinctions among intents, and a similarity scorer, which estimates the connections among intents more accurately. CTIR is easy to implement and can be integrated with existing ZSID and GZSID methods. Experiments on two real-world datasets show that CTIR brings considerable improvement to the baseline systems.
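
The core trick, feeding an unseen intent's label name to the model as if it were an utterance, is simple to picture; the sketch below assumes a plain (utterance, intent) pair format and an underscore-separated label naming convention, both of which are illustrative assumptions.

```python
def build_training_batch(labelled_utts, unseen_label_names):
    """labelled_utts: list of (utterance, intent) pairs from seen classes.
    Each unseen intent's verbalized label name doubles as a training
    utterance for that intent, so its representation is learned too."""
    batch = list(labelled_utts)
    for name in unseen_label_names:
        utterance = name.replace("_", " ")   # e.g. "book_flight" -> "book flight"
        batch.append((utterance, name))
    return batch
```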