Arrow Research search

Author name cluster

Si Shi

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

JBHI · 2026 · Journal Article

Value Decomposition-Based Multi-Agent Learning for Anesthetics Collaborative Control

  • Huijie Li
  • Yide Yu
  • Si Shi
  • Anmin Hu
  • Jian Huo
  • Wei Lin
  • Chaoran Wu
  • Wuman Luo

Automated control of personalized multiple anesthetics in clinical Total Intravenous Anesthesia (TIVA) is crucial yet challenging. Current systems, including target-controlled infusion (TCI) and closed-loop systems, either rely on relatively static pharmacokinetic/pharmacodynamic (PK/PD) models or focus on single-anesthetic control, limiting both personalization and collaborative control. To address these issues, we propose a novel Value Decomposition Multi-Agent Deep Reinforcement Learning (VD-MADRL) framework based on a Markov Game (MG) for Personalized Multiple Anesthetics Control in a Closed-Loop system (PMAC-CL). VD-MADRL optimizes the collaboration between two anesthetics, propofol (Agent I) and remifentanil (Agent II), by leveraging an MG to identify optimal actions among heterogeneous agents. We employ various value function decomposition methods to resolve the credit allocation problem and enhance collaborative control. We also introduce a multivariate environment model based on random forest (RF) for anesthesia state simulation. To ensure data validity, we design a data resampling and alignment technique that synchronizes trajectory data from different devices, avoiding gradient explosion and maintaining conformity to the Markov property. Extensive experiments on general and thoracic surgery datasets demonstrate that VD-MADRL provides more refined dose adjustments and maintains multiple anesthesia state indicators more stably at target levels compared to human experience. In particular, the best-performing algorithm, VDN in general surgery with online training, achieved a 16.4% increase in cumulative reward (CR) and a 58.0% reduction in mean MDPE compared to human experience, demonstrating its clinical value.
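The value-decomposition idea the abstract mentions (VDN being its best performer) can be sketched as follows. This is a minimal VDN-style illustration in NumPy, not the paper's implementation: the network sizes, the two-agent setup, and the greedy action selection are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def agent_q(obs, W1, b1, W2, b2):
    """Tiny per-agent Q-network: observation -> action values (one ReLU layer)."""
    h = np.maximum(obs @ W1 + b1, 0.0)
    return h @ W2 + b2

def vdn_mix(per_agent_qs):
    """VDN mixing: the joint Q-value is the sum of the agents' chosen-action Qs."""
    return np.sum(per_agent_qs, axis=1, keepdims=True)

obs_dim, n_actions, hidden, batch = 8, 5, 16, 4
params = [
    (rng.standard_normal((obs_dim, hidden)), np.zeros(hidden),
     rng.standard_normal((hidden, n_actions)), np.zeros(n_actions))
    for _ in range(2)  # two heterogeneous agents (e.g. one per anesthetic)
]

obs = rng.standard_normal((batch, obs_dim))          # shared patient-state batch
greedy_qs = np.stack(
    [agent_q(obs, *p).max(axis=1) for p in params], axis=1)  # (batch, 2)
q_total = vdn_mix(greedy_qs)                         # (batch, 1) joint value
```

Because the joint value is additive, each agent's gradient depends only on its own Q-head, which is what makes the credit-assignment scheme tractable for collaborative control.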

AAAI · 2025 · Conference Paper

Subgraph Invariant Learning Towards Large-Scale Graph Node Classification

  • Leilei Wang
  • Si Shi
  • Fei Ma
  • Fei Richard Yu
  • Pengteng Li
  • Ying Tiffany He

Graph Neural Networks (GNNs) have shown efficacy in graph node classification but face computational challenges on large-scale graphs. Although existing graph reduction methods address these issues, they still require substantial computational resources and fail to prioritize robust performance on out-of-distribution data. To tackle these challenges, we introduce the subgraph invariant learning paradigm, inspired by the small-world phenomenon. This approach enables models trained on specific subgraphs to generalize across diverse subgraphs, reducing computational demands and enhancing scalability. To promote generalization, we maximize the invariance log-likelihood by deriving a theoretical lower bound on it and formulating the InVar loss. This loss minimizes the discrepancy between node representations and their corresponding invariance representations while maximizing the entropy of the node representations. In response to the InVar loss, we propose the Invariance Facilitation Model (IFM), comprising the Invariance Representation Encoder (IRE) and the Node Representation Encoder (NRE). IRE captures the invariance representations, using Invariance ATTention (InvarATT) to compress long-range dependencies, while NRE learns the node representations by integrating invariance representations via Telematic ATTention (TeleATT) and exchanging local information within each subgraph through GNNs. Evaluations on four large-scale graph datasets demonstrate the effectiveness, computational efficiency, and interpretability of IFM for large-scale graph node classification.
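The loss structure described in the abstract (an alignment term toward the invariance representations plus an entropy-maximizing term) can be sketched as follows. This is a hypothetical NumPy reading of the abstract only: the squared-error alignment, the softmax-based entropy, and the `beta` weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np

def invar_loss(node_repr, invar_repr, beta=0.1, eps=1e-12):
    """Sketch of an InVar-style objective: minimize the discrepancy between
    node representations and their invariance representations while
    maximizing the entropy of the node representations."""
    # Alignment term: mean squared discrepancy between the two representations.
    align = np.mean((node_repr - invar_repr) ** 2)
    # Entropy term: softmax-normalize each node representation, average the
    # per-node entropies, and subtract it so minimizing the loss maximizes it.
    z = node_repr - node_repr.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    entropy = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    return align - beta * entropy

rng = np.random.default_rng(0)
h = rng.standard_normal((6, 4))                 # 6 nodes, 4-dim representations
h_inv = h + 0.1 * rng.standard_normal((6, 4))   # nearby invariance representations
loss = invar_loss(h, h_inv)
```

Perfectly aligned representations make the alignment term vanish, so the loss is driven entirely by the (negated) entropy bonus; any residual discrepancy raises the loss.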

AAAI · 2024 · Conference Paper

LGMRec: Local and Global Graph Learning for Multimodal Recommendation

  • Zhiqiang Guo
  • Jianjun Li
  • Guohui Li
  • Chaoyang Wang
  • Si Shi
  • Bin Ruan

Multimodal recommendation has gradually become part of the infrastructure of online media platforms, enabling them to provide personalized service to users through joint modeling of users' historical behaviors (e.g., purchases, clicks) and items' various modalities (e.g., visual and textual). The majority of existing studies focus on utilizing modal features or modality-related graph structures to learn users' local interests. Nevertheless, these approaches encounter two limitations: (1) shared updates of user ID embeddings couple collaborative and multimodal signals; and (2) they do not explore robust global user interests to alleviate the sparse-interaction problem faced by local interest modeling. To address these issues, we propose a novel Local and Global Graph Learning-guided Multimodal Recommender (LGMRec), which jointly models local and global user interests. Specifically, we present a local graph embedding module to independently learn collaborative-related and modality-related embeddings of users and items with local topological relations. Moreover, a global hypergraph embedding module is designed to capture global user and item embeddings by modeling insightful global dependency relations. The global embeddings acquired in the hypergraph embedding space can then be combined with the two decoupled local embeddings to improve the accuracy and robustness of recommendations. Extensive experiments on three benchmark datasets demonstrate the superiority of LGMRec over various state-of-the-art recommendation baselines, showcasing its effectiveness in modeling both local and global user interests.
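The local/global combination described above can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, a random binary incidence matrix, and a hypothetical fusion weight `alpha`; it is not LGMRec's actual architecture, only the general pattern of hypergraph message passing fused with decoupled local embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, d, n_hyper = 5, 8, 3

# Two decoupled local user embeddings (collaborative- and modality-related).
local_cf  = rng.standard_normal((n_users, d))
local_mod = rng.standard_normal((n_users, d))

# Hypergraph propagation: users attach to hyperedges (incidence matrix H),
# messages aggregate into hyperedges and then back to users.
H = (rng.random((n_users, n_hyper)) < 0.5).astype(float)
deg_u = np.maximum(H.sum(axis=1, keepdims=True), 1.0)   # user degrees
deg_e = np.maximum(H.sum(axis=0, keepdims=True), 1.0)   # hyperedge degrees
global_emb = (H / deg_u) @ ((H / deg_e).T @ local_cf)   # two-step message passing

# Fuse: decoupled local embeddings plus a weighted global signal.
alpha = 0.2
final_user = local_cf + local_mod + alpha * global_emb
```

The two-step product `H Hᵀ` (degree-normalized here) connects every pair of users sharing a hyperedge, which is how the global module can capture dependencies that sparse local interactions miss.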