Arrow Research search

Author name cluster

Chenghao Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
1 author row

Possible papers

8

JMLR Journal 2026 Journal Article

A Data-Augmented Contrastive Learning Approach to Nonparametric Density Estimation

  • Chenghao Li
  • Yuanyuan Lin

In this paper, we introduce a data-augmented nonparametric noise contrastive estimation method for density estimation using deep neural networks. By leveraging the idea of contrastive learning, our density estimator admits an efficient one-step, simulation-free evaluation process, imposes no constraints on the neural network, and is shown to be consistent and asymptotically automatically normalized. A novel data augmentation procedure allows us to mitigate the influence of the choice of reference distribution on our method. Non-asymptotic upper bounds for the expected $L_{2}$-risk and the expected total variation distance are established, which achieve minimax optimal rates. Moreover, our new method exhibits inherent adaptivity to low-dimensional structure in the data, with a faster convergence rate under a compositional structure assumption. Numerical experiments show the competitiveness of our new method compared with state-of-the-art nonparametric density estimation methods. © JMLR 2026.
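The contrastive identity behind such estimators (a classifier trained to distinguish data from a known reference recovers the log-density ratio) can be sketched in a few lines. This toy uses quadratic features in place of the paper's deep network and a uniform reference; it is an illustration of the general idea, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from the unknown density (a standard normal stands in) and
# from a known reference (noise) distribution, uniform on [-4, 4].
n = 4000
x_data = rng.normal(0.0, 1.0, n)
x_ref = rng.uniform(-4.0, 4.0, n)
log_q = np.log(1.0 / 8.0)  # log density of the uniform reference

def feats(x):
    # Quadratic features stand in for the paper's deep network.
    return np.stack([np.ones_like(x), x, x * x], axis=1)

X = np.vstack([feats(x_data), feats(x_ref)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = data, 0 = reference

# Logistic regression ("did this sample come from the data?") by
# plain gradient descent.
w = np.zeros(3)
for _ in range(8000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Contrastive identity: log p(x) ≈ logit(x) + log q(x).
def log_density(x):
    return feats(np.atleast_1d(np.asarray(x, dtype=float))) @ w + log_q

# For N(0, 1) data the estimated density at 0 should clearly exceed
# the estimated density at 2.
ratio = float(np.exp(log_density(0.0) - log_density(2.0))[0])
```

Because the classifier only ever evaluates the network once per point, the resulting density estimate needs no sampling or normalization step at query time.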

EAAI Journal 2026 Journal Article

Neural network-driven shape representation and computational particle mechanics via signed distance fields

  • Chenghao Li
  • Zhengshou Lai
  • Shuai Huang
  • Linchong Huang

This study presents a framework for shape representation and computational particle mechanics of granular materials using neural network-encoded signed distance fields. The approach leverages a neural network to learn and represent a signed distance field, mapping spatial points to their signed distance from the particle surface. Two neural network models are explored: one incorporating a latent code to capture shape variations, and the other without such encoding. The accuracy of these models in capturing particle morphology is rigorously evaluated, and their ability to generate new particles with realistic shapes is demonstrated. The proposed neural network-based approach is seamlessly integrated into the signed distance field-based discrete element method, enabling efficient and robust modeling of granular particles with arbitrary shapes. The integration is validated through discrete element-based simulations, demonstrating its effectiveness in particle mechanics applications. Additionally, memory consumption and computational performance are analyzed. These contributions position the neural network-encoded signed distance fields framework as a versatile and powerful tool for advancing computational modeling of granular materials.
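The role a signed distance field plays in a DEM-style contact check can be illustrated with an analytic sphere SDF standing in for the paper's trained network; the `contact_depth` helper and its surface-sampling scheme are simplifications for illustration, not the paper's integration.

```python
import numpy as np

# An analytic sphere SDF stands in for the paper's trained network:
# negative inside the particle, zero on the surface, positive outside.
def sphere_sdf(points, center, radius):
    return np.linalg.norm(points - center, axis=-1) - radius

# DEM-style contact check: sample particle B's surface and query particle
# A's SDF; penetration depth is the most negative signed distance found.
def contact_depth(center_a, r_a, center_b, r_b, n_samples=512):
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    surface_b = center_b + r_b * dirs          # points on B's surface
    d = sphere_sdf(surface_b, center_a, r_a)   # signed distance to A
    return max(0.0, -float(d.min()))

# Two unit spheres whose centers are 1.5 apart overlap by about 0.5.
depth = contact_depth(np.zeros(3), 1.0, np.array([1.5, 0.0, 0.0]), 1.0)
separated = contact_depth(np.zeros(3), 1.0, np.array([3.0, 0.0, 0.0]), 1.0)
```

Replacing `sphere_sdf` with a neural network that maps points (plus an optional latent shape code) to signed distances is what lets the same contact logic handle arbitrary particle shapes.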

JBHI Journal 2026 Journal Article

Structure and Semantics Aware Multi-View Contrastive Learning for Predicting Associations among lncRNAs, miRNAs and Diseases

  • Lan Huang
  • Yujuan Zhang
  • Chenghao Li
  • Yuan Fu
  • Yan Wang
  • Nan Sheng

Exploring associations among long non-coding RNAs (lncRNAs), microRNAs (miRNAs), and diseases is crucial for biomarker discovery and precision medicine. Existing computational methods are hindered by sparse known associations and the complexity of biological networks. To address this challenge, we propose SSMVCL (Structure- and Semantic-aware Multi-View Contrastive Learning), a unified framework for predicting lncRNA-disease associations (LDAs), miRNA-disease associations (MDAs), and lncRNA-miRNA interactions (LMIs). SSMVCL constructs a heterogeneous bioinformatics network from multi-source biological data and learns representations from two complementary views: a structure-aware view for local topology and a semantic-aware view using biologically meaningful meta-paths to capture high-order relationships. A cross-view contrastive alignment module with adaptive negative sampling enforces consistency between views and enhances discriminative capability. On two benchmark datasets, SSMVCL achieves state-of-the-art performance: for Dataset2, AUC/AUPR of 0.9736/0.9716 (LDA), 0.9364/0.9309 (MDA), and 0.9297/0.9234 (LMI). Case studies on gastric and prostate cancers further validate its robustness and translational potential by identifying supported associations.
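The cross-view alignment idea (the same node's structure-view and semantic-view embeddings form a positive pair, all other nodes act as negatives) can be sketched as a standard InfoNCE-style loss. This is a generic sketch, not SSMVCL's exact objective or its adaptive negative sampling.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # Row i of z1 and row i of z2 are a positive pair; every other row
    # of z2 serves as a negative for row i.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # cosine sim / temperature
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_p)))       # NLL of the matching pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # views agree
shuffled = info_nce(z, z[rng.permutation(32)])              # views clash
```

Minimizing this loss pulls each node's two view embeddings together while pushing apart embeddings of different nodes, which is what "enforcing consistency between views" amounts to.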

NeurIPS Conference 2025 Conference Paper

Continual Knowledge Adaptation for Reinforcement Learning

  • Jinwu Hu
  • ZiHao Lian
  • Zhiquan Wen
  • Chenghao Li
  • Guohao Chen
  • Xutao Wen
  • Bin Xiao
  • Mingkui Tan

Reinforcement Learning enables agents to learn optimal behaviors through interactions with environments. However, real-world environments are typically non-stationary, requiring agents to continuously adapt to new tasks and changing conditions. Although Continual Reinforcement Learning facilitates learning across multiple tasks, existing methods often suffer from catastrophic forgetting and inefficient knowledge utilization. To address these challenges, we propose Continual Knowledge Adaptation for Reinforcement Learning (CKA-RL), which enables the accumulation and effective utilization of historical knowledge. Specifically, we introduce a Continual Knowledge Adaptation strategy, which involves maintaining a task-specific knowledge vector pool and dynamically using historical knowledge to adapt the agent to new tasks. This process mitigates catastrophic forgetting and enables efficient knowledge transfer across tasks by preserving and adapting critical model parameters. Additionally, we propose an Adaptive Knowledge Merging mechanism that combines similar knowledge vectors to address scalability challenges, reducing memory requirements while ensuring the retention of essential knowledge. Experiments on three benchmarks demonstrate that the proposed CKA-RL outperforms state-of-the-art methods, achieving an improvement of 4.20% in overall performance and 8.02% in forward transfer. The source code is available at https://github.com/Fhujinwu/CKA-RL.
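The pool-merging bookkeeping described above can be illustrated with a toy rule: greedily average knowledge vectors whose cosine similarity exceeds a threshold, keeping dissimilar (task-specific) vectors separate. The `merge_pool` function and its threshold are illustrative assumptions, not the paper's exact Adaptive Knowledge Merging mechanism.

```python
import numpy as np

def merge_pool(pool, threshold=0.95):
    # Greedy merging: fold each vector into the first existing entry it
    # closely resembles; otherwise keep it as a new pool entry.
    merged = []
    for v in pool:
        for i, m in enumerate(merged):
            cos = v @ m / (np.linalg.norm(v) * np.linalg.norm(m))
            if cos > threshold:
                merged[i] = (m + v) / 2.0  # combine similar knowledge
                break
        else:
            merged.append(v.copy())        # dissimilar: keep separately
    return merged

rng = np.random.default_rng(0)
a = rng.normal(size=8)
pool = [a, a + 0.01 * rng.normal(size=8), rng.normal(size=8)]
merged = merge_pool(pool)  # the two near-duplicates collapse into one
```

The point of such a rule is that pool size grows with the number of *distinct* behaviors rather than the number of tasks, which is the scalability property the abstract claims.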

AAAI Conference 2024 Conference Paper

Learning Diverse Risk Preferences in Population-Based Self-Play

  • Yuhua Jiang
  • Qihan Liu
  • Xiaoteng Ma
  • Chenghao Li
  • Yiqin Yang
  • Jun Yang
  • Bin Liang
  • Qianchuan Zhao

Among the remarkable successes of Reinforcement Learning (RL), self-play algorithms have played a crucial role in solving competitive games. However, current self-play RL methods commonly optimize the agent to maximize the expected win rate against its current or historical copies, resulting in a limited strategy style and a tendency to get stuck in local optima. To address this limitation, it is important to improve the diversity of policies, allowing the agent to break stalemates and enhance its robustness when facing different opponents. In this paper, we present a novel perspective to promote diversity by considering that agents could have diverse risk preferences in the face of uncertainty. To achieve this, we introduce a novel reinforcement learning algorithm called Risk-sensitive Proximal Policy Optimization (RPPO), which smoothly interpolates between worst-case and best-case policy learning, enabling policy learning with desired risk preferences. Furthermore, by seamlessly integrating RPPO with population-based self-play, agents in the population optimize dynamic risk-sensitive objectives using experiences gained from playing against diverse opponents. Our empirical results demonstrate that our method achieves comparable or superior performance in competitive games and, importantly, leads to the emergence of diverse behavioral modes. Code is available at https://github.com/Jackory/RPBT.
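The idea of smoothly interpolating between worst-case and best-case evaluation can be sketched with a rank-weighted average of sampled returns, where a single preference parameter tilts the weights. The weighting scheme below is an illustrative stand-in, not the actual RPPO objective.

```python
import numpy as np

def risk_value(returns, alpha):
    # alpha < 0.5 tilts weight toward low (worst-case) returns,
    # alpha > 0.5 toward high (best-case) ones; alpha = 0.5 gives the
    # plain risk-neutral mean.
    r = np.sort(np.asarray(returns, dtype=float))
    n = len(r)
    ranks = np.arange(n)
    temp = 10.0 * (alpha - 0.5)          # risk-preference strength
    w = np.exp(temp * ranks / max(n - 1, 1))
    w /= w.sum()
    return float(r @ w)

returns = [-2.0, 0.0, 1.0, 3.0, 10.0]
pessimist = risk_value(returns, 0.1)  # dominated by the -2.0 outcome
neutral = risk_value(returns, 0.5)    # plain mean, 2.4
optimist = risk_value(returns, 0.9)   # dominated by the 10.0 outcome
```

A population of agents, each optimizing such an objective with its own alpha, exhibits distinct behavioral styles (cautious vs. aggressive), which is the diversity mechanism the abstract describes.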

NeurIPS Conference 2021 Conference Paper

Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning

  • Yiqin Yang
  • Xiaoteng Ma
  • Chenghao Li
  • Zewu Zheng
  • Qiyuan Zhang
  • Gao Huang
  • Jun Yang
  • Qianchuan Zhao

Learning from datasets without interaction with environments (offline learning) is an essential step toward applying Reinforcement Learning (RL) algorithms in real-world scenarios. However, compared with its single-agent counterpart, offline multi-agent RL introduces more agents with larger state and action spaces, which is more challenging but has attracted little attention. We demonstrate that current offline RL algorithms are ineffective in multi-agent systems due to accumulated extrapolation error. In this paper, we propose a novel offline RL algorithm, named Implicit Constraint Q-learning (ICQ), which effectively alleviates the extrapolation error by only trusting the state-action pairs given in the dataset for value estimation. Moreover, we extend ICQ to multi-agent tasks by decomposing the joint policy under the implicit constraint. Experimental results demonstrate that the extrapolation error is successfully controlled within a reasonable range and is insensitive to the number of agents. We further show that ICQ achieves state-of-the-art performance in challenging multi-agent offline tasks (StarCraft II). Our code is publicly available at https://github.com/YiqinYang/ICQ.
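The "only trust dataset actions" principle can be sketched as a value target that softmax-weights the Q-values of actions actually present in the batch, rather than maximizing over all actions (which would extrapolate to actions never observed offline). This is a simplified sketch in the spirit of ICQ, not its exact update.

```python
import numpy as np

def icq_style_target(q_dataset, beta=1.0):
    # Instead of max over ALL actions (which can select actions never
    # observed offline), softmax-weight only the Q-values of actions that
    # appear in the dataset, so the target never trusts OOD actions.
    q = np.asarray(q_dataset, dtype=float)
    w = np.exp((q - q.max()) / beta)  # shift by max for stability
    w /= w.sum()
    return float(q @ w)

# Q-values of the actions actually present in the batch for one state.
q_seen = [1.0, 2.0, 0.5]
target = icq_style_target(q_seen, beta=0.5)
```

As beta shrinks, the target approaches the best *seen* Q-value; as beta grows, it approaches the mean over seen actions. Either way it stays inside the convex hull of dataset Q-values, which is what bounds the extrapolation error.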

NeurIPS Conference 2021 Conference Paper

Celebrating Diversity in Shared Multi-Agent Reinforcement Learning

  • Chenghao Li
  • Tonghan Wang
  • Chengjie Wu
  • Qianchuan Zhao
  • Jun Yang
  • Chongjie Zhang

Recently, deep multi-agent reinforcement learning (MARL) has shown promise in solving complex cooperative tasks. Its success is partly due to parameter sharing among agents. However, such sharing may lead agents to behave similarly and limit their coordination capacity. In this paper, we aim to introduce diversity into both the optimization and the representation of shared multi-agent reinforcement learning. In optimization, we propose an information-theoretic regularization to maximize the mutual information between agents' identities and their trajectories, encouraging extensive exploration and diverse individualized behaviors. In representation, we incorporate agent-specific modules into the shared neural network architecture, regularized by the L1-norm to encourage sharing among agents while preserving necessary diversity. Empirical results show that our method achieves state-of-the-art performance on Google Research Football and super-hard StarCraft II micromanagement tasks.
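The representation side (shared parameters plus L1-regularized agent-specific modules) can be sketched directly; the shapes and penalty weight below are arbitrary illustrative choices, not the paper's architecture.

```python
import numpy as np

# Shared layer plus per-agent modules; an L1 penalty on the agent-specific
# weights pushes most entries toward zero, so agents share parameters by
# default and deviate only where diverse behavior pays off.
n_agents, d_in, d_out, lam = 4, 8, 8, 0.1  # illustrative sizes
rng = np.random.default_rng(0)
w_shared = rng.normal(size=(d_in, d_out))
w_agent = [0.01 * rng.normal(size=(d_in, d_out)) for _ in range(n_agents)]

def forward(x, agent_id):
    # Effective weights = shared part + (sparse) agent-specific part.
    return x @ (w_shared + w_agent[agent_id])

def l1_penalty():
    # Added to the training loss to keep agent-specific modules sparse.
    return lam * sum(np.abs(w).sum() for w in w_agent)

x = np.ones(d_in)
outs = [forward(x, i) for i in range(n_agents)]  # slightly per-agent
```

Under this structure, setting an agent's module to zero recovers fully shared behavior, so the L1 penalty lets training decide per-weight where diversity is worth its cost.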

AAMAS Conference 2021 Conference Paper

Modeling the Interaction between Agents in Cooperative Multi-Agent Reinforcement Learning

  • Xiaoteng Ma
  • Yiqin Yang
  • Chenghao Li
  • Yiwen Lu
  • Qianchuan Zhao
  • Jun Yang

Value-based methods of multi-agent reinforcement learning (MARL), especially value decomposition methods, have been demonstrated on a range of challenging cooperative tasks. However, current methods pay little attention to the interaction between agents, which is essential to teamwork in games and real life. This limits the efficiency of value-based MARL algorithms in two aspects: collaborative exploration and value function estimation. In this paper, we propose a novel cooperative MARL algorithm named interactive actor-critic (IAC), which models the interaction of agents from the perspectives of both policy and value function. On the policy side, a multi-agent joint stochastic policy is introduced via a collaborative exploration module, which is trained by maximizing the entropy-regularized expected return. On the value side, we use a shared attention mechanism to estimate each agent's value function, taking the impact of teammates into consideration. At the implementation level, we extend value decomposition methods to continuous control tasks and evaluate IAC on benchmark tasks including classic control and multi-agent particle environments. Experimental results indicate that our method outperforms state-of-the-art approaches and achieves better cooperative performance.
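The shared attention mechanism on the value side can be illustrated with plain scaled dot-product attention over agents' encodings, so each agent's attended features depend on its teammates. The weight matrices here are random stand-ins for learned parameters, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_value_features(obs, wq, wk, wv):
    # Each agent (row) attends over all agents' encodings, so its value
    # estimate can reflect the impact of its teammates.
    q, k, v = obs @ wq, obs @ wk, obs @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))  # (n_agents, n_agents)
    return scores @ v  # attended features, one row per agent

rng = np.random.default_rng(0)
n_agents, d = 3, 4
obs = rng.normal(size=(n_agents, d))           # per-agent encodings
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
values = attention_value_features(obs, wq, wk, wv)
```

A per-agent value head applied to these attended features then yields teammate-aware value estimates while keeping the attention parameters shared across agents.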