Arrow Research

Author name cluster

Jialong Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
1 matching author record

Possible papers (8)

TAAS Journal 2026 Journal Article

Context-Aware Proactive Self-Adaptation: A Two-Layer Model Predictive Control Approach

  • Zhengyin Chen
  • Jialong Li
  • Nianyu Li
  • Wenpin Jiao
  • Eunsuk Kang

In self-adaptive software systems, the role of context is paramount, especially for proactive self-adaptation. Current research, however, does not fully explore the impact of context, for example on the priorities of requirements. To address this gap, we introduce a novel contextual goal model to capture these factors and their influence on the system. Using this, we propose a two-layer control mechanism with context-aware model predictive control to achieve proactive adaptation for the software system and adaptation for the controller itself. Through contextual prediction and a more accurate system model, our approach utilizes model predictive control to facilitate timely and efficient system adaptations, improving both performance and adaptability. Meanwhile, we perform requirement adaptation to update the contextual goal model, which in turn updates the objective function and constraints of the controller. Our experimental evaluations across two scenarios demonstrate the significant benefits of our approach in enhancing system performance.
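
As a rough Python sketch of the two-layer idea only (not the authors' implementation): an outer requirement-adaptation step maps a predicted context to the controller's objective weights and constraint bound, and an inner receding-horizon MPC step plans the next input. The scalar system model, context predictor, and weights are assumptions made up for illustration.

```python
# Illustrative sketch only; system model, context predictor, and weights are assumptions.
import numpy as np
from scipy.optimize import minimize

A, B = 0.9, 0.5        # assumed scalar linear system: x_next = A*x + B*u
HORIZON = 5

def predict_context(t):
    """Hypothetical context forecast, e.g. expected workload at time t."""
    return 1.0 + 0.5 * np.sin(0.3 * t)

def adapt_requirements(context):
    """Outer layer: derive objective weights and a constraint bound from context."""
    w_track = 1.0 + context      # higher load -> prioritize tracking the setpoint
    w_effort = 0.1
    u_max = 2.0
    return w_track, w_effort, u_max

def mpc_step(x0, target, w_track, w_effort, u_max):
    """Inner layer: solve a finite-horizon problem and apply the first input."""
    def cost(u_seq):
        x, c = x0, 0.0
        for u in u_seq:
            x = A * x + B * u
            c += w_track * (x - target) ** 2 + w_effort * u ** 2
        return c
    bounds = [(-u_max, u_max)] * HORIZON
    return minimize(cost, np.zeros(HORIZON), bounds=bounds).x[0]

x, target = 0.0, 3.0
for t in range(20):
    ctx = predict_context(t)
    weights = adapt_requirements(ctx)      # adaptation of the controller itself
    u = mpc_step(x, target, *weights)      # proactive adaptation of the system
    x = A * x + B * u
    print(f"t={t:2d}  context={ctx:.2f}  u={u:+.2f}  x={x:.2f}")
```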

TAAS Journal 2025 Journal Article

Adaptive Preferences: Pivoting Through User Complaints

  • Mingyue Zhang
  • Jialong Li
  • Nianyu Li
  • Eunsuk Kang
  • Kenji Tei

In the evolution of software systems, especially in domains like autonomous vehicles, dynamic user preferences are critical yet challenging to accommodate. Existing methods often misrepresent these preferences, either by overlooking their dynamism or by overburdening users, as humans often find it challenging to express their objectives mathematically. Addressing this, we introduce a novel framework that interprets dynamic preferences as inherent uncertainty, anchored on a “human-on-the-loop” mechanism enabling users to give feedback when dissatisfied with system behaviors. Leveraging a designed fitness function, our system employs a genetic algorithm to adapt preference values, aligning preferences with user expectations through feedback-driven adaptation. We validated its effectiveness with an autonomous driving prototype and a user study involving 20 participants. The findings highlight our framework’s capability to effectively merge algorithm-driven adjustments with user complaints, leading to improved subjective satisfaction among participants in autonomous systems.
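
A minimal sketch of the feedback-driven adaptation loop, under made-up assumptions (not the paper's designed fitness function or prototype): preferences are a normalized weight vector over illustrative objectives, each complaint names an objective the user feels is under-weighted, and a small genetic algorithm evolves the weights accordingly.

```python
# Illustrative sketch only; objectives, complaints, and the fitness are assumptions.
import random

OBJECTIVES = ["safety", "speed", "comfort"]      # illustrative preference dimensions
complaints = ["comfort", "comfort", "safety"]    # hypothetical user feedback log

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def fitness(weights):
    """Stand-in fitness: reward weight placed on objectives users complained about."""
    return sum(weights[OBJECTIVES.index(c)] for c in complaints)

def mutate(w, rate=0.2):
    return normalize([max(1e-3, x + random.uniform(-rate, rate)) for x in w])

def crossover(a, b):
    return normalize([(x + y) / 2 for x, y in zip(a, b)])

population = [normalize([random.random() + 1e-3 for _ in OBJECTIVES]) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    elite = population[:5]                       # keep the best candidates
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(15)]
    population = elite + children

best = max(population, key=fitness)
print({obj: round(w, 2) for obj, w in zip(OBJECTIVES, best)})
```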

AAAI Conference 2025 Conference Paper

Learning Verified Safe Neural Network Controllers for Multi-Agent Path Finding

  • Mingyue Zhang
  • Nianyu Li
  • Yi Chen
  • Jialong Li
  • Xiao-Yi Zhang
  • Hengjun Zhao
  • Jiamou Liu
  • Wu Chen

Multi-agent path finding (MAPF) is a safety-critical scenario where the goal is to secure collision-free trajectories from initial to desired locations. However, due to system complexity and uncertainty, integrating learning-based controllers with MAPF is challenging, and the safety of the learned controllers cannot be theoretically guaranteed. In response, our study proposes a verified safe multi-agent neural control (VSMANC) approach for MAPF, focusing on the unified training of Decentralized Control Barrier Functions (DCBF) and controllers to enhance safety. VSMANC enables all agents to concurrently learn controllers and DCBFs using a unified loss function designed to maximize safety, adhere to standard control policies, and incorporate path-finding-related heuristics. We also propose a formal verification-guided retraining process to both verify the properties of the learned DCBFs and generate counterexamples for retraining, thereby providing a verified safety guarantee. We validate our approach through shape formation experiments and UAV simulations, demonstrating significant improvements in safety and effectiveness in complex multi-agent environments.
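
A minimal PyTorch sketch of the unified-loss idea only, under simplifying assumptions (a single agent with 2-D single-integrator dynamics and made-up weights; the multi-agent setting and the verification-guided retraining are not reproduced): one network represents the barrier function, another the controller, and the loss combines a discrete CBF safety condition, closeness to a nominal controller, and a goal-reaching heuristic.

```python
# Illustrative sketch only; dynamics, networks, and loss weights are assumptions.
import torch
import torch.nn as nn

state_dim, act_dim, alpha = 4, 2, 0.5   # state = [position(2), goal(2)]
cbf = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, 1))
policy = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, act_dim))
opt = torch.optim.Adam(list(cbf.parameters()) + list(policy.parameters()), lr=1e-3)

def nominal_controller(x):
    """Standard control policy stand-in: drive straight toward the goal."""
    return x[:, 2:] - x[:, :2]

for step in range(200):
    x = torch.rand(64, state_dim) * 4 - 2             # sampled training states
    u = policy(x)
    pos_next = x[:, :2] + 0.1 * u                     # single-integrator rollout
    x_next = torch.cat([pos_next, x[:, 2:]], dim=1)
    h, h_next = cbf(x), cbf(x_next)

    loss_safe = torch.relu(-(h_next - h + alpha * h)).mean()     # discrete CBF condition
    loss_nominal = ((u - nominal_controller(x)) ** 2).mean()     # stay near nominal policy
    loss_goal = (pos_next - x[:, 2:]).norm(dim=1).mean()         # path-finding heuristic

    loss = 10.0 * loss_safe + 1.0 * loss_nominal + 0.1 * loss_goal
    opt.zero_grad()
    loss.backward()
    opt.step()
```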

JBHI Journal 2025 Journal Article

Spherical Harmonics-Based Deep Learning Achieves Generalized and Accurate Diffusion Tensor Imaging

  • Yunwei Chen
  • Jialong Li
  • Qiqi Lu
  • Ye Wu
  • Xiaoming Liu
  • Yuanyuan Gao
  • Yanqiu Feng
  • Zhicheng Zhang

Diffusion tensor imaging (DTI) is a prevalent magnetic resonance imaging (MRI) technique, widely used in clinical and neuroscience research. However, the reliability of DTI is affected by the low signal-to-noise ratio inherent in diffusion-weighted (DW) images. Deep learning (DL) has shown promise in improving the quality of DTI, but its limited generalization to variable acquisition schemes hinders practical applications. This study aims to develop a generalized, accurate, and efficient DL-based DTI method. By leveraging the representation of voxel-wise diffusion MRI (dMRI) signals on the sphere using spherical harmonics (SH), we propose a novel approach that utilizes SH coefficient maps as input to a network for predicting the diffusion tensor (DT) field, enabling improved generalization. Extensive experiments were conducted on simulated and in-vivo datasets, covering various DTI application scenarios. The results demonstrate that the proposed SH-DTI method achieves advanced performance in both quantitative and qualitative analyses of DTI. Moreover, it exhibits remarkable generalization capabilities across different acquisition schemes, centers, and scanners, ensuring its broad applicability in diverse settings.
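
A minimal sketch of the scheme-independence idea only (placeholder data and a toy network, not the SH-DTI model): diffusion-weighted signals acquired on an arbitrary gradient scheme are projected onto a fixed spherical-harmonics basis by least squares, so the network input has a fixed size regardless of acquisition, and a small MLP maps the SH coefficients to the six unique diffusion tensor elements.

```python
# Illustrative sketch only; the SH basis, data, and network are placeholders.
import torch
import torch.nn as nn

n_dirs, n_sh, n_voxels = 30, 15, 1024   # order-4 real SH (even orders) has 15 terms

# B would be the real SH basis evaluated at the acquisition's gradient directions;
# it is random here purely so the example runs end to end.
B = torch.randn(n_dirs, n_sh)
signals = torch.rand(n_voxels, n_dirs)                 # placeholder DW signals per voxel

# Voxel-wise least-squares SH fit: coeffs = argmin_c ||B @ c - s||^2
coeffs = torch.linalg.lstsq(B, signals.T).solution.T   # shape (n_voxels, n_sh)

# Toy network predicting the six unique tensor elements (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz)
net = nn.Sequential(nn.Linear(n_sh, 64), nn.ReLU(), nn.Linear(64, 6))
dt_elements = net(coeffs)
print(dt_elements.shape)   # torch.Size([1024, 6])
```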

TAAS Journal 2024 Journal Article

A Game-Theoretical Self-Adaptation Framework for Securing Software-Intensive Systems

  • Nianyu Li
  • Mingyue Zhang
  • Jialong Li
  • Sridhar Adepu
  • Eunsuk Kang
  • Zhi Jin

Security attacks present unique challenges to the design of self-adaptation mechanisms for software-intensive systems due to the adversarial nature of the environment. Game-theoretical approaches have been explored in security to model malicious behaviors and design reliable defenses for the system in a mathematically grounded manner. However, modeling the system as a single player, as done in prior works, is insufficient for a system under partial compromise and for the design of fine-grained defensive policies in which the rest of the system, with its autonomy, can cooperate to mitigate the impact of attacks. To address these issues, we propose a new self-adaptation framework incorporating Bayesian game theory that models the defender (i.e., the system) at the granularity of components. Under security attacks, the architecture model of the system is automatically translated, by the proposed translation process with designed algorithms, into a multi-player Bayesian game. This representation allows each component to be modeled as an independent player, while security attacks are encoded as variant types for the components. By solving for a pure-strategy equilibrium (i.e., the adaptation response), the system’s optimal defensive strategy is dynamically computed, enhancing system resilience against security attacks by maximizing system utility. We validate the effectiveness of our framework through two sets of experiments using generic benchmark tasks tailored for the security domain. Additionally, we exemplify the practical application of our approach through a real-world implementation in the Secure Water Treatment System to demonstrate its applicability and potency in mitigating security risks.
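
The sketch below illustrates only the game-theoretic core, not the framework's architecture-to-game translation: two components are players in a toy Bayesian game, each privately either healthy or compromised, and a brute-force search finds a pure-strategy equilibrium over type-contingent strategies. Payoffs and the compromise prior are made up.

```python
# Illustrative toy Bayesian game; payoffs, types, and priors are assumptions.
from itertools import product

ACTIONS = ["normal", "defend"]
TYPES = ["healthy", "compromised"]
PRIOR = {"healthy": 0.7, "compromised": 0.3}   # assumed compromise probability

def utility(my_type, my_action, other_type, other_action):
    """Toy component payoff: defending is costly but blocks a compromised peer."""
    u = -1.0 if my_action == "defend" else 0.0
    if my_type == "healthy" and other_type == "compromised" and my_action == "normal":
        u -= 4.0                               # unmitigated attack damage
    if my_type == "compromised" and other_action == "normal":
        u += 3.0                               # attacker's gain against an undefended peer
    return u

# A pure strategy maps a player's own type to an action: (if_healthy, if_compromised).
strategies = list(product(ACTIONS, repeat=len(TYPES)))

def expected_utility(strat, other_strat):
    total = 0.0
    for my_type, other_type in product(TYPES, repeat=2):
        p = PRIOR[my_type] * PRIOR[other_type]
        my_action = strat[TYPES.index(my_type)]
        other_action = other_strat[TYPES.index(other_type)]
        total += p * utility(my_type, my_action, other_type, other_action)
    return total

# Brute-force search for a pure-strategy Bayesian Nash equilibrium.
for s0, s1 in product(strategies, repeat=2):
    best0 = all(expected_utility(s0, s1) >= expected_utility(alt, s1) for alt in strategies)
    best1 = all(expected_utility(s1, s0) >= expected_utility(alt, s0) for alt in strategies)
    if best0 and best1:
        print("equilibrium:", dict(zip(TYPES, s0)), dict(zip(TYPES, s1)))
        break
```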

TAAS Journal 2024 Journal Article

Generative AI for Self-Adaptive Systems: State of the Art and Research Roadmap

  • Jialong Li
  • Mingyue Zhang
  • Nianyu Li
  • Danny Weyns
  • Zhi Jin
  • Kenji Tei

Self-adaptive systems (SASs) are designed to handle changes and uncertainties through a feedback loop with four core functionalities: monitoring, analyzing, planning, and execution. Recently, generative artificial intelligence (GenAI), especially large language models, has shown impressive performance in data comprehension and logical reasoning. These capabilities are highly aligned with the functionalities required in SASs, suggesting a strong potential to employ GenAI to enhance SASs. However, the specific benefits and challenges of employing GenAI in SASs remain unclear. Providing a comprehensive understanding of these benefits and challenges is complex for several reasons: limited publications in the SAS field, the technological and application diversity within SASs, and the rapid evolution of GenAI technologies. To that end, this article aims to provide researchers and practitioners with a comprehensive snapshot that outlines the potential benefits and challenges of employing GenAI within SASs. Specifically, we gather, filter, and analyze literature from four distinct research fields and organize the potential benefits into two main categories: (i) enhancements to the autonomy of SASs centered around the specific functions of the MAPE-K feedback loop, and (ii) improvements in the interaction between humans and SASs within human-on-the-loop settings. From our study, we outline a research roadmap that highlights the challenges of integrating GenAI into SASs. The roadmap starts by outlining key research challenges that need to be tackled to exploit the potential of applying GenAI in the field of SASs. The roadmap concludes with a practical reflection, elaborating on current shortcomings of GenAI and proposing possible mitigation strategies.

TMLR Journal 2024 Journal Article

Large Language Models Synergize with Automated Machine Learning

  • Jinglue Xu
  • Jialong Li
  • Zhen Liu
  • NAV Suryanarayanan
  • Guoyuan Zhou
  • Jia Guo
  • Hitoshi Iba
  • Kenji Tei

Recently, program synthesis driven by large language models (LLMs) has become increasingly popular. However, program synthesis for machine learning (ML) tasks still poses significant challenges. This paper explores a novel form of program synthesis, targeting ML programs, by combining LLMs and automated machine learning (autoML). Specifically, our goal is to fully automate the generation and optimization of the code of the entire ML workflow, from data preparation to modeling and post-processing, utilizing only textual descriptions of the ML tasks. To manage the length and diversity of ML programs, we propose to break each ML program into smaller, manageable parts. Each part is generated separately by the LLM, with careful consideration of their compatibility. To ensure compatibility, we design a testing technique for ML programs. Unlike traditional program synthesis, which typically relies on binary evaluations (i.e., correct or incorrect), evaluating ML programs necessitates more than just binary judgments. Our approach automates the numerical evaluation and optimization of these programs, selecting the best candidates through autoML techniques. In experiments across various ML tasks, our method outperforms existing methods in 10 out of 12 tasks for generating ML programs. In addition, autoML significantly improves the performance of the generated ML programs. Given a textual task description, our method, Text-to-ML, generates the complete and optimized ML program in a fully autonomous process. The implementation of our method is available at https://github.com/JLX0/llm-automl.
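
A schematic Python sketch of the part-wise generation and numerical selection idea only (here the LLM-generated candidates for each workflow part are stubbed with scikit-learn components and the dataset is synthetic; the actual Text-to-ML system generates code from textual task descriptions and optimizes it with autoML):

```python
# Illustrative sketch only; candidate parts, data, and scoring are stand-ins.
from itertools import product
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Stand-ins for LLM-generated candidates for two workflow parts.
preprocessing_candidates = {"standardize": StandardScaler(), "minmax": MinMaxScaler()}
modeling_candidates = {"logreg": LogisticRegression(max_iter=500),
                       "forest": RandomForestClassifier(n_estimators=50, random_state=0)}

best_score, best_combo = -1.0, None
for (p_name, prep), (m_name, model) in product(preprocessing_candidates.items(),
                                               modeling_candidates.items()):
    pipeline = make_pipeline(prep, model)
    try:
        # Compatibility test and numerical evaluation in one step: a combination
        # that fails to run is discarded, otherwise it is scored.
        score = cross_val_score(pipeline, X, y, cv=3).mean()
    except Exception:
        continue
    if score > best_score:
        best_score, best_combo = score, (p_name, m_name)

print(best_combo, round(best_score, 3))
```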

AAMAS Conference 2024 Conference Paper

Memory-Based Resilient Control Against Non-cooperation in Multi-agent Flocking

  • Mingyue Zhang
  • Nianyu Li
  • Jialong Li
  • Jiachun Liao
  • Jiamou Liu

Inspired by natural flocking behaviors, researchers aim to develop a distributed control approach for artificial agents to mimic these behaviors. The main challenge lies in maintaining the resilience of the artificial flock, as some agents inevitably display non-cooperative behavior, thereby deviating from the flocking objective. Existing control approaches, especially those based on learning algorithms, are susceptible to forgetting issues that non-cooperative agents can exploit to disrupt the flock formation. To address this problem, this study introduces a memory-based resilient control approach that strategically analyzes historical data across three distinct time scales (long, short, and periodic). The long short periodic-term memory (LSP) algorithm employs accumulative discounted credibility evaluated by Q-learning to recognize long-term non-cooperation, utilizes a filtering rule to establish a trusted set excluding short-term non-cooperation, and integrates the fast Fourier transform to refine the trusted set against periodic inconsistency. We assess the effectiveness of this approach through extensive experiments. The results highlight the potential and advantages of using LSP in flocking, enhancing the resilience of multi-agent flocking against complex non-cooperative threats.
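
A minimal sketch of the three-timescale intuition only, with made-up cooperation signals and thresholds (not the LSP algorithm itself): each neighbor's observed cooperation history is summarized by a discounted long-term credibility, a short-term filtering rule over a recent window, and an FFT check for periodic defection; a neighbor joins the trusted set only if it passes all three.

```python
# Illustrative sketch only; cooperation signals, thresholds, and weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, GAMMA = 128, 0.95

neighbors = {                                               # 1.0 = cooperative step
    "steady":   np.ones(T),
    "defector": (rng.random(T) > 0.6).astype(float),        # mostly non-cooperative
    "periodic": ((np.arange(T) // 8) % 2).astype(float),    # cooperates in bursts
}

def long_term_credibility(obs):
    """Accumulative discounted credibility; older observations decay."""
    weights = GAMMA ** np.arange(len(obs))[::-1]
    return float((obs * weights).sum() / weights.sum())

def short_term_ok(obs, window=8, threshold=0.5):
    """Filtering rule over the most recent window."""
    return obs[-window:].mean() >= threshold

def periodic_suspicious(obs, power_ratio=0.2):
    """Flag a dominant non-DC frequency in the cooperation signal via the FFT."""
    spectrum = np.abs(np.fft.rfft(obs - obs.mean()))
    return spectrum.max() ** 2 > power_ratio * (spectrum ** 2).sum()

trusted = [name for name, obs in neighbors.items()
           if long_term_credibility(obs) > 0.55
           and short_term_ok(obs)
           and not periodic_suspicious(obs)]
print("trusted set:", trusted)   # only the consistently cooperative neighbor remains
```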