Arrow Research search

Author name cluster

Chao Jiang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers


EAAI Journal 2025 Journal Article

Hyperspectral imaging for rapid impurity detection in power system liquids

  • Liang Xue
  • Li Zhang
  • Zhuoyue Yang
  • Youhua Jiang
  • Chao Jiang
  • Haoyang Cui

The chemical integrity of power system liquids, such as coolants and transformer oils, is critical for the reliable operation of energy systems. Contaminants such as carbon, iron, copper, and tin can compromise cooling efficiency, increase failure risks, reduce equipment lifespan, and cause electrical malfunctions, thereby threatening the safety and stability of these systems. This study presents an innovative approach that integrates hyperspectral imaging (HSI) with machine learning (ML) algorithms to identify and quantify impurities in these liquids. A weighted ensemble model, referred to as the WeightedEnsemble_L2 model, has been developed and optimized. This model utilizes thirteen advanced machine-learning algorithms to identify impurities by analyzing spectral signatures across a broad wavelength range. The implemented artificial intelligence (AI) model demonstrates 90% accuracy on the training set and 87.53% on the validation set. This novel approach offers a robust solution for impurity detection in power system liquids, supporting predictive maintenance and enhancing the safety and stability of energy systems through the practical application of AI technology.
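The core combination step of a weighted ensemble like the one the abstract describes can be sketched in a few lines. This is a minimal soft-voting illustration, not the paper's actual WeightedEnsemble_L2 stacker; the function name, the two-model setup, and the weights are assumptions for demonstration.

```python
import numpy as np

def weighted_ensemble_predict(probs_list, weights):
    """Combine per-model class probabilities with a weight vector and
    return the highest-probability class per sample (illustrative
    stand-in for a learned weighted-ensemble combiner)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalise the model weights
    combined = np.tensordot(w, np.asarray(probs_list), axes=1)
    return combined.argmax(axis=1)            # predicted impurity class per spectrum

# Two hypothetical base models scoring two spectra over three
# impurity classes (e.g. carbon / iron / copper):
probs = [
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],
    [[0.1, 0.8, 0.1], [0.3, 0.4, 0.3]],
]
preds = weighted_ensemble_predict(probs, weights=[0.75, 0.25])
```

In practice the per-model weights would themselves be fit on held-out predictions, which is what distinguishes a weighted ensemble from simple averaging.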

AAAI Conference 2025 Conference Paper

Trading Off Quality and Uncertainty Through Multi-Objective Optimisation in Batch Bayesian Optimisation

  • Chao Jiang
  • Miqing Li

Batch Bayesian Optimisation (BBO) has emerged as a potent approach for optimising expensive black-box functions. Central to BBO is the issue of selecting a number of solutions at the same time through a batch method, in the hope that they represent good, yet different, trade-offs between exploitation and exploration. To address this issue, one recent advancement has leveraged multi-objective optimisation to simultaneously consider several acquisition functions (e.g., PI, EI, and LCB), allowing them to complement each other. However, acquisition functions may behave similarly (since they all aim for a good balance between exploitation and exploration), restricting the search to different promising areas. In this paper, we attempt to address the above issue. We directly treat exploitation (reflected by quality, i.e., the posterior mean) and exploration (reflected by uncertainty) as two objectives. When selecting trade-off solutions between the two objectives, we consider a dynamically updated Pareto front where the uncertainty changes once a solution is selected, thereby allowing exploration of different promising areas. Through an extensive experimental study, we show the effectiveness of the proposed method in comparison with state-of-the-art methods in the area.
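The selection idea the abstract describes — a two-objective Pareto front over posterior mean and uncertainty, with uncertainty updated after each pick — can be sketched as a greedy loop. This is a hypothetical simplification under assumed details (a squared-exponential correlation for the uncertainty update, a best-mean tie-break on the front), not the authors' exact algorithm.

```python
import numpy as np

def select_batch(mean, std, X, batch_size, length_scale=1.0):
    """Greedy two-objective batch selection: mean is minimised (quality),
    std is maximised (uncertainty). After each pick, the uncertainty of
    correlated candidates shrinks, so later picks explore other regions."""
    mean, std = mean.astype(float).copy(), std.copy()
    chosen = []
    for _ in range(batch_size):
        n = len(mean)
        # A point is dominated if another point is at least as good on
        # both objectives and strictly better on one.
        dominated = np.array([
            any(mean[j] <= mean[i] and std[j] >= std[i]
                and (mean[j] < mean[i] or std[j] > std[i])
                for j in range(n))
            for i in range(n)])
        front = np.flatnonzero(~dominated)
        pick = front[np.argmin(mean[front])]   # simple tie-break: best mean
        chosen.append(pick)
        mean[pick] = np.inf                    # exclude the pick itself
        # Dynamic update: candidates correlated with the pick lose uncertainty.
        corr = np.exp(-np.sum((X - X[pick]) ** 2, axis=1) / (2 * length_scale ** 2))
        std = std * (1 - corr)
    return chosen
```

The key design point is that the Pareto front is recomputed inside the loop: once a solution is taken, nearby candidates drop down the uncertainty objective, steering the next pick toward a different promising area.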

AAAI Conference 2020 Conference Paper

Discourse Level Factors for Sentence Deletion in Text Simplification

  • Yang Zhong
  • Chao Jiang
  • Wei Xu
  • Junyi Jessy Li

This paper presents a data-driven study focusing on analyzing and predicting sentence deletion — a prevalent but understudied phenomenon in document simplification — on a large English text simplification corpus. We inspect various document and discourse factors associated with sentence deletion, using a new manually annotated sentence alignment corpus we collected. We reveal that professional editors utilize different strategies to meet readability standards of elementary and middle schools. To predict whether a sentence will be deleted during simplification to a certain level, we harness automatically aligned data to train a classification model. Evaluated on our manually annotated data, our best models reached F1 scores of 65.2 and 59.7 for this task at the levels of elementary and middle school, respectively. We find that discourse level factors contribute to the challenging task of predicting sentence deletion for simplification.
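The prediction task the abstract frames — scoring a sentence for deletion from document and discourse factors — reduces to binary classification over per-sentence features. The toy scorer below is purely illustrative: the feature set (relative position, discourse-tree depth, length) and the weights are assumptions, not the authors' trained model.

```python
import math

def deletion_score(position_ratio, discourse_depth, length,
                   weights=(1.8, 0.9, -0.05), bias=-1.2):
    """Toy logistic scorer over discourse-style features: returns the
    probability that a sentence is deleted during simplification.
    Weights here are hand-picked for illustration only."""
    z = (bias
         + weights[0] * position_ratio    # sentences later in the document
         + weights[1] * discourse_depth   # deeply embedded discourse units
         + weights[2] * length)           # longer sentences, small penalty
    return 1 / (1 + math.exp(-z))

# A late, deeply embedded sentence vs. an early, shallow one:
late = deletion_score(position_ratio=0.9, discourse_depth=3, length=12)
early = deletion_score(position_ratio=0.1, discourse_depth=1, length=12)
```

A real model of this kind would learn the weights from the automatically aligned data the paper describes, rather than fixing them by hand.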