Arrow Research search

Author name cluster

Haoran Yang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
2 author rows

Possible papers

9

EAAI Journal 2026 Journal Article

Two-phase strategy framework for spatial prediction of landslide hazards in wide-area power linear engineering projects: the case of the China's Renewable Energy Transmission Corridors

  • Bijing Jin
  • Kunlong Yin
  • Taorui Zeng
  • Shuhao Liu
  • Yang Liu
  • Haoran Yang
  • Kai Wang
  • Lei Gui

A critical knowledge gap persists in the development of high-precision spatial prediction frameworks for landslide susceptibility assessment along wide-area linear power infrastructure. This study therefore develops a novel two-phase optimization framework to address this gap, focusing on China's Renewable Energy Transmission Corridors (RETCs). Phase I employs natural breaks (optimal at 26-level grading) to address spatial heterogeneity in conditioning factors, while Phase II optimizes the selection of non-landslide samples based on different geological environment zones and areas with lower susceptibility levels. Six base machine learning models were evaluated, with two ensemble models (Stacking and Blending) achieving superior performance, with Area Under the Curve (AUC) values exceeding 0.88. The Blending model demonstrated peak accuracy (AUC = 0.927), identifying 35% of transmission towers in high and very high susceptibility zones across nine provinces. The framework enables tower-specific susceptibility assessment, crucial for protecting China's 80,000 km transmission network. These findings advance RETC resilience by: (1) establishing a continuous conditioning-factor optimal grading strategy for linear infrastructure, (2) introducing a replicable non-landslide sample optimization protocol, and (3) demonstrating the superiority of ensemble models in energy corridor landslide susceptibility mapping. This framework provides robust support for securing stable clean energy delivery, with potential applications in global renewable energy grid landslide hazard management.

AAAI Conference 2025 Conference Paper

Fast Track to Winning Tickets: Repowering One-Shot Pruning for Graph Neural Networks

  • Yanwei Yue
  • Guibin Zhang
  • Haoran Yang
  • Dawei Cheng

Graph Neural Networks (GNNs) demonstrate superior performance in various graph learning tasks, yet their wider real-world application is hindered by the computational overhead when applied to large-scale graphs. To address this issue, the Graph Lottery Ticket (GLT) hypothesis has been proposed, advocating the identification of subgraphs and subnetworks, i.e., winning tickets, without compromising performance. The effectiveness of current GLT methods largely stems from the use of iterative magnitude pruning (IMP), which offers greater stability and better performance than one-shot pruning. However, identifying GLTs is highly computationally expensive, due to the iterative pruning and retraining required by IMP. In this paper, we reevaluate the correlation between one-shot pruning and IMP: while one-shot tickets are suboptimal compared to IMP, they offer a fast track to tickets with stronger performance. We introduce a one-shot pruning and denoising framework to validate the efficacy of this fast track. Compared to current IMP-based GLT methods, our framework achieves a double win: graph lottery tickets with higher sparsity, found at faster speeds. Through extensive experiments across 4 backbones and 6 datasets, our method demonstrates a 1.32%-45.62% improvement in weight sparsity and a 7.49%-22.71% increase in graph sparsity, along with a 1.7-44× speedup over IMP-based methods and 95.3%-98.6% MAC savings.
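As an illustrative aside (a minimal numpy sketch of the general technique, not the authors' implementation), one-shot magnitude pruning removes a target fraction of the smallest-magnitude weights in a single pass, which is the cheap starting point that an IMP alternative builds on:

```python
import numpy as np

def one_shot_magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a binary mask that zeroes the smallest-magnitude entries
    of `weights` in one step (one-shot pruning, no retraining loop)."""
    k = int(weights.size * sparsity)              # number of entries to remove
    if k == 0:
        return np.ones_like(weights)
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return (np.abs(weights) > threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
mask = one_shot_magnitude_prune(w, sparsity=0.5)
print(f"kept {mask.mean():.2%} of weights")
```

IMP would instead repeat a small pruning step plus retraining many times; the paper's point is that a single pass like this, followed by denoising, can reach comparable tickets far faster.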

AAAI Conference 2025 Conference Paper

Focus on Local: Finding Reliable Discriminative Regions for Visual Place Recognition

  • Changwei Wang
  • Shunpeng Chen
  • Yukun Song
  • Rongtao Xu
  • Zherui Zhang
  • Jiguang Zhang
  • Haoran Yang
  • Yu Zhang

Visual Place Recognition (VPR) aims to predict the location of a query image by referencing a database of geotagged images. In VPR, a few discriminative local regions in an image often carry the important information, while mundane background regions contribute little or even cause perceptual aliasing because they overlap easily across places. However, existing methods lack precise modeling and full exploitation of these discriminative regions. In addition, the lack of pixel-level correspondence supervision in VPR datasets hinders further improvement of local feature matching in the re-ranking stage. In this paper, we propose the Focus on Local (FoL) approach, which improves image retrieval and re-ranking in VPR simultaneously by mining and exploiting reliable discriminative local regions in images and introducing pseudo-correspondence supervision. First, we design two losses, the Extraction-Aggregation Spatial Alignment Loss (SAL) and the Foreground-Background Contrast Enhancement Loss (CEL), to explicitly model reliable discriminative local regions and use them to guide the generation of global representations and efficient re-ranking. Second, we introduce a weakly supervised local feature training strategy based on pseudo-correspondences obtained from aggregating global features, to alleviate the lack of ground-truth local correspondences for the VPR task. Third, we propose a re-ranking pipeline that is both efficient and precise, guided by the discriminative regions. Finally, experimental results show that our FoL achieves state-of-the-art performance on multiple VPR benchmarks in both the image retrieval and re-ranking stages, and also significantly outperforms existing two-stage VPR methods in computational efficiency.

NeurIPS Conference 2025 Conference Paper

Jacobian-Based Interpretation of Nonlinear Neural Encoding Model

  • Haoran Yang
  • Cheng Yue
  • Mengfei Zuo
  • Yiheng Liu
  • Peiyang Li
  • Xiaohui Gao

In recent years, the alignment between artificial neural network (ANN) embeddings and blood oxygenation level dependent (BOLD) responses in functional magnetic resonance imaging (fMRI) via neural encoding models has significantly advanced research on neural representation mechanisms and interpretability in the brain. However, these approaches remain limited in characterizing the brain’s inherently nonlinear response properties. To address this, we propose the Jacobian-based Nonlinearity Evaluation (JNE), an interpretability metric for nonlinear neural encoding models. JNE quantifies nonlinearity by statistically measuring the dispersion of local linear mappings (Jacobians) from model representations to predicted BOLD responses, thereby approximating the nonlinearity of BOLD signals. Centered on proposing JNE as a novel interpretability metric, we validated its effectiveness through controlled simulation experiments on various activation functions and network architectures, and further verified it on real fMRI data, demonstrating a hierarchical progression of nonlinear characteristics from primary to higher-order visual cortices, consistent with established cortical organization. We further extended JNE with Sample-Specificity (JNE-SS), revealing stimulus-selective nonlinear response patterns in functionally specialized brain regions. As the first interpretability metric for quantifying nonlinear responses, JNE provides new insights into brain information processing. Code available at https://github.com/Gaitxh/JNE.
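The core idea, dispersion of local Jacobians as a nonlinearity score, can be illustrated with a toy numeric sketch (our own finite-difference construction and toy models, not the paper's code): a linear map has the same Jacobian everywhere, so its dispersion is near zero, while a nonlinear map's Jacobian varies across inputs.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-5):
    """Central finite-difference Jacobian of f: R^d -> R^m at point x."""
    d, m = x.size, f(x).size
    J = np.zeros((m, d))
    for i in range(d):
        dx = np.zeros(d)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def jne_score(f, xs):
    """Dispersion of local Jacobians across sample points: ~0 for a
    linear map, positive when f responds nonlinearly (the JNE idea)."""
    jacs = np.stack([numerical_jacobian(f, x) for x in xs])
    return jacs.std(axis=0).mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))          # toy "encoding" weights
xs = rng.normal(size=(20, 5))        # toy representation samples
linear = lambda x: W @ x             # linear response model
nonlin = lambda x: np.tanh(W @ x)    # nonlinear response model
print(jne_score(linear, xs), jne_score(nonlin, xs))
```

The paper computes such scores per voxel over fMRI predictions; this sketch only shows why Jacobian dispersion separates linear from nonlinear mappings.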

IJCAI Conference 2025 Conference Paper

UltraModel: A Modeling Paradigm for Industrial Objects

  • Haoran Yang
  • Yinan Zhang
  • Qunshan He
  • Yuqi Ye
  • Jing Zhao
  • Wenhai Wang

As Industry 4.0 unfolds and digital twin technology rapidly advances, modeling techniques that can abstract real-world industrial objects into accurate and robust models, referred to as modeling for industrial objects (MIO) tasks, have become increasingly crucial. However, existing works still face two major limitations. First, each of these works primarily focuses on modeling a specific industrial object; when the industrial object changes, the proposed methods often struggle to adapt. Second, they fail to fully consider latent relationships within industrial data, limiting the model's ability to leverage the data and resulting in suboptimal performance. To address these issues, we propose a novel modeling paradigm tailored for MIO tasks, named UltraModel. Specifically, a twin model graph module is designed to construct a customized graph based on the mechanisms of industrial objects and employ graph convolution to generate high-dimensional representations. Then, a multi-scale feature abstraction module and a spatial attention-based feature fusion module are proposed to complement each other in performing multi-scale feature abstraction and fusion on the high-dimensional representations. Finally, the outputs are obtained by processing the fused representations through a feedforward network. Experiments on two different industrial objects demonstrate that UltraModel outperforms existing methods, offering a novel perspective for addressing industrial modeling challenges.

NeurIPS Conference 2024 Conference Paper

Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles

  • Rui Duan
  • Mingjian Guang
  • Junli Wang
  • Chungang Yan
  • Hongda Qi
  • Wenkang Su
  • Can Tian
  • Haoran Yang

Polynomial-based learnable spectral graph neural networks (GNNs) use polynomials to approximate graph convolutions and have achieved impressive performance on graphs. Nevertheless, three progressive problems remain to be solved. Some models use polynomials with better approximation properties for approximating filters, yet perform worse on real-world graphs. Carefully crafted graph learning methods, sophisticated polynomial approximations, and refined coefficient constraints have led to overfitting, which diminishes the generalization of the models. How can one design a model that retains the ability of polynomial-based spectral GNNs to approximate filters while possessing higher generalization and performance? In this paper, we propose a spectral GNN with triple filter ensembles (TFE-GNN), which adaptively extracts homophily and heterophily from graphs with different levels of homophily while utilizing the initial features. Specifically, the first and second ensembles are combinations of a set of base low-pass and high-pass filters, respectively, after which the third ensemble combines them with two learnable coefficients to yield a graph convolution (TFE-Conv). Theoretical analysis shows that the approximation ability of TFE-GNN is consistent with that of ChebNet under certain conditions, namely that it can learn arbitrary filters. TFE-GNN can be viewed as a reasonable combination of two unfolded and integrated excellent spectral GNNs, which motivates its strong performance. Experiments show that TFE-GNN achieves high generalization and new state-of-the-art performance on various real-world datasets.
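The filter-ensemble structure can be sketched with a toy numpy example (our own simplification using powers of the normalized adjacency as the base filters and fixed coefficients standing in for the learnable ones; this is not the paper's TFE-Conv): a low-pass ensemble smooths features over neighbors, a high-pass ensemble emphasizes differences, and two coefficients mix them.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def filter_ensemble(X, A_hat, alpha, beta, K=2):
    """Mix a low-pass ensemble (powers of A_hat) and a high-pass
    ensemble (powers of I - A_hat) with two mixing coefficients."""
    n = A_hat.shape[0]
    low = sum(np.linalg.matrix_power(A_hat, k) @ X for k in range(1, K + 1)) / K
    high = sum(np.linalg.matrix_power(np.eye(n) - A_hat, k) @ X
               for k in range(1, K + 1)) / K
    return alpha * low + beta * high

# 4-node path graph with a binary node feature
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [0.0], [0.0], [1.0]])
Y = filter_ensemble(X, normalized_adjacency(A), alpha=0.7, beta=0.3)
print(Y.ravel())
```

In TFE-GNN the two coefficients are learned per graph, letting homophilous graphs weight the low-pass branch and heterophilous graphs the high-pass branch.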

NeurIPS Conference 2023 Conference Paper

An Empirical Study Towards Prompt-Tuning for Graph Contrastive Pre-Training in Recommendations

  • Haoran Yang
  • Xiangyu Zhao
  • Yicong Li
  • Hongxu Chen
  • Guandong Xu

Graph contrastive learning (GCL) has emerged as a potent technology for numerous graph learning tasks. It has been successfully applied to real-world recommender systems, where the contrastive loss and the downstream recommendation objectives are always combined to form the overall objective function. Such a strategy is inconsistent with the original GCL paradigm, where graph embeddings are pre-trained without involving downstream training objectives. In this paper, we innovatively propose a prompt-enhanced framework for GCL-based recommender systems, namely CPTPP, which can fully leverage the advantages of the original GCL protocol through prompt tuning. Specifically, we first summarise user profiles in graph recommender systems to automatically generate personalized user prompts. These prompts are then combined with pre-trained user embeddings to conduct prompt-tuning in downstream tasks, thereby narrowing the gap between pre-training and downstream objectives. Extensive experiments on three benchmark datasets validate the effectiveness of CPTPP against state-of-the-art baselines. A further visualization experiment demonstrates that user embeddings generated by CPTPP have a more uniform distribution, indicating a better capacity to model the diversity of user preferences. The implementation code is available online to ease reproducibility: https://anonymous.4open.science/r/CPTPP-F8F4

AAAI Conference 2023 Conference Paper

On the Effectiveness of Parameter-Efficient Fine-Tuning

  • Zihao Fu
  • Haoran Yang
  • Anthony Man-Cho So
  • Wai Lam
  • Lidong Bing
  • Nigel Collier

Fine-tuning pre-trained models has been ubiquitously proven effective in a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it always yields an entirely new model for each task. Currently, many research works propose to fine-tune only a small portion of the parameters while keeping most of the parameters shared across different tasks. These methods achieve surprisingly good performance and are shown to be more stable than their fully fine-tuned counterparts. However, such methods are still not well understood. Some natural questions arise: How does the parameter sparsity lead to promising performance? Why is the model more stable than the fully fine-tuned models? How should the tunable parameters be chosen? In this paper, we first categorize the existing methods into random approaches, rule-based approaches, and projection-based approaches based on how they choose which parameters to tune. Then, we show that all of these methods are in fact sparse fine-tuned models and conduct a novel theoretical analysis of them. We indicate that the sparsity imposes a regularization on the original model by controlling the upper bound of the stability. Such stability leads to better generalization capability, which has been empirically observed in many recent research works. Despite the effectiveness of sparsity grounded by our theory, how to choose the tunable parameters remains an open problem. Currently, the random and rule-based methods do not utilize task-specific data information, while the projection-based approaches suffer from the projection discontinuity problem. To better choose the tunable parameters, we propose a novel Second-order Approximation Method (SAM), which approximates the original problem with an analytically solvable optimization function. The tunable parameters are determined by directly optimizing the approximation function.
We conduct extensive experiments on several tasks. The experimental results show that our proposed SAM model outperforms many strong baseline models, and they also verify our theoretical analysis. The source code of this paper can be obtained from https://github.com/fuzihaofzh/AnalyzeParameterEfficientFinetune.

ECAI Conference 2020 Conference Paper

A Neural Topical Expansion Framework for Unstructured Persona-Oriented Dialogue Generation

  • Minghong Xu
  • Piji Li
  • Haoran Yang
  • Pengjie Ren
  • Zhaochun Ren
  • Zhumin Chen
  • Jun Ma 0001

Unstructured Persona-oriented Dialogue Systems (UPDS) have been demonstrated effective in generating persona-consistent responses by utilizing predefined natural language user persona descriptions (e.g., “I am a vegan”). However, the predefined user persona descriptions are usually short and limited to only a few descriptive words, which makes it hard to correlate them with the dialogues. As a result, existing methods either fail to use the persona descriptions or use them improperly when generating persona-consistent responses. To address this, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), which is able to extend the predefined user persona description with semantically correlated content before utilizing it to generate dialogue responses. PEE consists of two main modules: persona exploration and persona exploitation. The former learns to extend the predefined user persona description by mining and correlating with an existing dialogue corpus using a variational auto-encoder (VAE) based topic model. The latter learns to generate persona-consistent responses by utilizing the predefined and extended user persona descriptions. To make persona exploitation utilize user persona descriptions more properly, we also introduce two persona-oriented loss functions: the Persona-oriented Matching (P-Match) loss and the Persona-oriented Bag-of-Words (P-BoWs) loss, which respectively supervise persona selection in the encoder and decoder. Experimental results show that our approach outperforms state-of-the-art baselines in terms of both automatic and human evaluations.