Arrow Research search

Author name cluster

Hangyu Ye

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers (2)

AAAI Conference 2025 · Conference Paper

AdaGK-SGD: Adaptive Global Knowledge Guided Distributed Stochastic Gradient Descent

  • Hangyu Ye
  • Weiying Xie
  • Yunsong Li
  • Leyuan Fang

Distributed machine learning (DML) is promising for training large models on large datasets. In DML, multiple workers collaborate on training neural networks, significantly reducing training time. The efficiency of DML is heavily influenced by communication, making it crucial to balance the trade-off between communication cost and model performance. Local methods excel at reducing communication costs, yet suffer degraded accuracy and generalizability. Global knowledge is valuable for improving the performance of local methods. However, theoretical analysis of the validity of global knowledge is lacking, and because of communication limitations and staleness, global knowledge can currently be used only in the global aggregation step of local methods. To this end, in this paper we establish a mechanism of global knowledge guidance and propose Adaptive Global Knowledge Guided Distributed Stochastic Gradient Descent (AdaGK-SGD), which extends the guidance of global knowledge to the whole distributed training process without any additional communication. Specifically, we define the maximum lifetime of global knowledge based on this mechanism and establish a correlation between the maximum lifetime and the validity of global knowledge, circumventing the adverse effects of staleness. The Maximum Lifetime of Global Knowledge module of our algorithm can also be applied separately to other algorithms. In addition, with practical application in mind, we provide a straightforward and efficient strategy for setting the maximum lifetime adaptively. We establish convergence rates of AdaGK-SGD for both convex and non-convex scenarios. Numerically, we find that AdaGK-SGD significantly improves the accuracy and generalizability of distributed algorithms compared with existing methods.
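The general idea the abstract describes — local SGD steps guided by the last global aggregate, with the guidance expiring after a fixed number of steps — can be illustrated with a minimal single-process simulation. This is a sketch only, not the paper's actual algorithm: names such as `MAX_LIFETIME` and `GUIDE_WEIGHT`, the quadratic toy loss, and the specific guidance term are all assumptions made for illustration.

```python
# Illustrative sketch (NOT the paper's algorithm): local SGD with periodic
# averaging, where the last global average ("global knowledge") regularizes
# local steps only while it is fresh, i.e. for at most MAX_LIFETIME steps.
import random

random.seed(0)

NUM_WORKERS = 4
LOCAL_STEPS = 5       # local steps between global aggregations
MAX_LIFETIME = 3      # hypothetical: steps during which global knowledge is trusted
LR = 0.1
GUIDE_WEIGHT = 0.5    # hypothetical: pull strength toward the last global average

def grad(w, target):
    # gradient of a simple quadratic loss (w - target)^2 / 2
    return w - target

# each worker sees slightly different data (simulated heterogeneity)
targets = [1.0, 1.2, 0.8, 1.1]
workers = [0.0] * NUM_WORKERS
global_model = 0.0

for round_ in range(10):
    for i in range(NUM_WORKERS):
        w = workers[i]
        for step in range(LOCAL_STEPS):
            g = grad(w, targets[i] + random.gauss(0, 0.05))
            # guide local updates with global knowledge only while it is
            # fresh; stale guidance is dropped to avoid adverse effects
            if step < MAX_LIFETIME:
                g += GUIDE_WEIGHT * (w - global_model)
            w -= LR * g
        workers[i] = w
    # global aggregation: average the local models; the guidance term above
    # reuses this average, so no additional communication is required
    global_model = sum(workers) / NUM_WORKERS
    workers = [global_model] * NUM_WORKERS

print(global_model)
```

Under these toy assumptions the model converges toward the average of the workers' targets, while the lifetime cap limits how long a stale global average can bias local steps.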

AAAI Conference 2025 · Conference Paper

Aligning and Prompting Anything for Zero-Shot Generalized Anomaly Detection

  • Jitao Ma
  • Weiying Xie
  • Hangyu Ye
  • Daixun Li
  • Leyuan Fang

Zero-shot generalized anomaly detection (ZGAD) plays a critical role in industrial automation and health screening. Recent studies have shown that ZGAD methods built on visual-language models (VLMs) such as CLIP have excellent cross-domain detection performance. Unlike other computer vision tasks, ZGAD must jointly optimize image-level anomaly classification and pixel-level anomaly segmentation, determining whether an image contains anomalies and localizing the anomalous regions, respectively; the two tasks therefore operate at different granularities. Existing methods ignore this problem and handle both tasks with a single set of broad text prompts describing the whole image. This limits CLIP's ability to align textual features with pixel-level visual features and impairs anomaly segmentation performance. Therefore, for precise visual-text alignment, in this paper we propose a novel fine-grained text prompt generation strategy. We then apply the broad text prompts and the generated fine-grained text prompts for visual-textual alignment in the classification and segmentation tasks, respectively, accurately capturing normal and anomalous instances in images. We also introduce the Text Prompt Shunt (TPS) model, which performs joint learning by reconstructing the complementary and dependency relationships between the two tasks to enhance anomaly detection performance. This enables our method to focus on fine-grained segmentation of anomalous targets while ensuring accurate anomaly classification, achieving pixel-level comprehensible CLIP for the first time in the ZGAD task. Extensive experiments on 13 real-world anomaly detection datasets demonstrate that TPS achieves superior ZGAD performance across highly diverse datasets from industrial and medical domains.
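The two-granularity alignment the abstract describes can be sketched with CLIP-style cosine similarities: broad prompts score the whole image for classification, while fine-grained prompts score each patch for segmentation. This is a minimal illustration, not the TPS model: all embeddings are random stand-ins, and names such as `broad_prompts` and `fine_prompts` are hypothetical.

```python
# Illustrative sketch (NOT the paper's method): broad text prompts score
# image-level anomaly classification; fine-grained prompts score each patch
# for pixel-level segmentation, via CLIP-style normalized dot products.
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (real CLIP variants use e.g. 512 or 768)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# text embeddings, one row each for [normal, anomalous]
broad_prompts = normalize(rng.normal(size=(2, D)))   # describe the whole image
fine_prompts = normalize(rng.normal(size=(2, D)))    # describe local defects

image_feat = normalize(rng.normal(size=(D,)))        # global image feature
patch_feats = normalize(rng.normal(size=(4, 4, D)))  # 4x4 grid of patch features

# image-level classification: softmax over [normal, anomalous] similarities
logits = broad_prompts @ image_feat
p_anomalous = np.exp(logits[1]) / np.exp(logits).sum()

# pixel-level segmentation: per-patch anomaly probability map built from the
# fine-grained prompts rather than the broad whole-image prompts
patch_logits = patch_feats @ fine_prompts.T          # shape (4, 4, 2)
anomaly_map = np.exp(patch_logits[..., 1]) / np.exp(patch_logits).sum(-1)

print(p_anomalous, anomaly_map.shape)
```

The point of the sketch is the split: if the same broad prompts were reused for `patch_logits`, pixel-level alignment would be limited, which is exactly the problem the abstract attributes to prior methods.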