Arrow Research search

Author name cluster

Haocheng Luo

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers (3)

IROS 2025 · Conference Paper

Spatial-Temporal Graph Contrastive Learning with Decreasing Masks for Traffic Flow Forecasting

  • Bin Ren
  • Yongfa Zhang
  • Yamin Wen
  • Haocheng Luo
  • Hao Zhang
  • Chunhong He

In recent years, contrastive learning has shown great potential for traffic flow prediction. However, existing contrastive learning methods struggle with missing data and noise, and relying on a single contrastive view makes it difficult to fully capture both local and global correlations. This paper proposes a Decreasing-Mask Spatio-Temporal Graph Contrastive Learning model (DMSTGCL). The model dynamically adjusts the mask ratio through an adaptive mask-reduction technique, which mitigates the effects of missing data and noise. In addition, the projection head is combined with a TripleAttention mechanism during spatio-temporal contrastive learning, overcoming the limitations of a single contrastive view and capturing complex local and global spatial relationships more effectively. Experiments on three real-world datasets show that DMSTGCL achieves significantly higher prediction accuracy than existing methods.
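
A rough illustration of the decreasing-mask idea: the fraction of masked node features used to build the contrastive views is decayed as training progresses, so early epochs see heavier corruption and later epochs train on cleaner inputs. The linear schedule, function names, and tensor shapes below are assumptions for illustration, not the authors' implementation.

```python
import torch

def mask_ratio(epoch: int, total_epochs: int,
               start: float = 0.5, end: float = 0.1) -> float:
    """Linearly decrease the mask ratio from `start` to `end` over training
    (assumed linear schedule; the paper's exact decay rule may differ)."""
    t = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + (end - start) * t

def masked_view(x: torch.Tensor, ratio: float) -> torch.Tensor:
    """Build an augmented view by zeroing a random subset of node rows.

    x: node feature matrix of shape (num_nodes, num_features)."""
    keep = (torch.rand(x.size(0), 1, device=x.device) >= ratio).float()
    return x * keep

# Two stochastic views of the same graph signal for a contrastive loss.
x = torch.randn(207, 64)                      # e.g. 207 sensors, 64-d features
r = mask_ratio(epoch=10, total_epochs=100)
view_a, view_b = masked_view(x, r), masked_view(x, r)
```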

NeurIPS 2025 · Conference Paper

Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise

  • Haocheng Luo
  • Mehrtash Harandi
  • Dinh Phung
  • Trung Le

Sharpness-aware minimization (SAM) has emerged as a highly effective technique for improving model generalization, but its underlying principles are not fully understood. We investigate m-sharpness, the phenomenon in which SAM's performance improves monotonically as the micro-batch size used to compute perturbations decreases; this behavior is critical for distributed training yet lacks a rigorous explanation. We leverage an extended stochastic differential equation (SDE) framework and analyze stochastic gradient noise (SGN) to characterize the dynamics of SAM variants, including n-SAM and m-SAM. Our analysis reveals that stochastic perturbations induce an implicit variance-based sharpness regularization whose strength increases as m decreases. Motivated by this insight, we propose Reweighted SAM (RW-SAM), which employs sharpness-weighted sampling to mimic the generalization benefits of m-SAM while remaining parallelizable. Comprehensive experiments validate our theory and method.
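
A minimal PyTorch sketch of the m-SAM baseline referenced in the abstract (not RW-SAM itself): the SAM perturbation is computed independently on each micro-batch of size m, and the gradients evaluated at the perturbed weights are accumulated before a single optimizer step. The function name, rho, and the averaging details are assumptions, not the authors' code.

```python
import torch

def m_sam_step(model, loss_fn, inputs, targets, base_opt, rho=0.05, m=16):
    """One m-SAM update over a full batch, using micro-batches of size m."""
    params = [p for p in model.parameters() if p.requires_grad]
    base_opt.zero_grad()

    for xb, yb in zip(inputs.split(m), targets.split(m)):
        # 1) Ascent (perturbation) direction computed on this micro-batch only.
        loss = loss_fn(model(xb), yb)
        grads = torch.autograd.grad(loss, params)
        scale = rho / (torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12)
        eps = [g * scale for g in grads]

        # 2) Gradient at the perturbed point w + eps, accumulated into p.grad.
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.add_(e)
        loss_fn(model(xb), yb).backward()
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.sub_(e)

    # Average the accumulated gradients over micro-batches, then update once.
    n_chunks = (inputs.size(0) + m - 1) // m
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p.grad.div_(n_chunks)
    base_opt.step()
```

Because each micro-batch gets its own perturbation, shrinking m strengthens the stochastic component of the perturbation, which is the effect the abstract's variance-based regularization argument is about.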

NeurIPS 2024 · Conference Paper

Explicit Eigenvalue Regularization Improves Sharpness-Aware Minimization

  • Haocheng Luo
  • Tuan Truong
  • Tung Pham
  • Mehrtash Harandi
  • Dinh Phung
  • Trung Le

Sharpness-Aware Minimization (SAM) has attracted significant attention for its effectiveness in improving generalization across various tasks. However, its underlying principles remain poorly understood. In this work, we analyze SAM's training dynamics using the maximum eigenvalue of the Hessian as a measure of sharpness and propose a third-order stochastic differential equation (SDE), which reveals that the dynamics are driven by a complex mixture of second- and third-order terms. We show that alignment between the perturbation vector and the top eigenvector is crucial for SAM's effectiveness in regularizing sharpness, but find that this alignment is often inadequate in practice, which limits SAM's efficiency. Building on these insights, we introduce Eigen-SAM, an algorithm that explicitly aims to regularize the top Hessian eigenvalue by aligning the perturbation vector with the leading eigenvector. We validate the effectiveness of our theory and the practical advantages of our proposed approach through comprehensive experiments. Code is available at https://github.com/RitianLuo/EigenSAM.
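
The core ingredient described here, estimating the top Hessian eigenvector so the perturbation can be aligned with it, can be sketched with power iteration over Hessian-vector products. The iteration count, initialization, and function name below are assumptions for illustration, not the released Eigen-SAM code linked above.

```python
import torch

def top_hessian_eigvec(loss, params, iters=10):
    """Estimate the leading Hessian eigenvector of `loss` w.r.t. `params`
    by power iteration with Hessian-vector products (illustrative sketch)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v)) + 1e-12
        v = [vi / norm for vi in v]
        # Hessian-vector product: differentiate <grad(loss), v> w.r.t. params.
        hv = torch.autograd.grad(
            sum((g * vi).sum() for g, vi in zip(grads, v)),
            params, retain_graph=True)
        v = [h.detach() for h in hv]
    norm = torch.sqrt(sum((vi ** 2).sum() for vi in v)) + 1e-12
    return [vi / norm for vi in v]
```

The estimated direction could then be blended into the SAM perturbation so the ascent step tracks the top curvature direction, which is the alignment the abstract argues is often missing in vanilla SAM.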