Arrow Research search

Author name cluster

Siqi Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

12 papers
2 author rows

Possible papers (12)

NeurIPS Conference 2025 Conference Paper

BrainOmni: A Brain Foundation Model for Unified EEG and MEG Signals

  • Qinfan Xiao
  • Ziyun Cui
  • Chi Zhang
  • Siqi Chen
  • Wen Wu
  • Andrew Thwaites
  • Alexandra Woolgar
  • Bowen Zhou

Electroencephalography (EEG) and magnetoencephalography (MEG) measure neural activity non-invasively by capturing electromagnetic fields generated by dendritic currents. Although rooted in the same biophysics, EEG and MEG exhibit distinct signal patterns, further complicated by variations in sensor configurations across modalities and recording devices. Existing approaches typically rely on separate, modality- and dataset-specific models, which limits the performance and cross-domain scalability. This paper proposes BrainOmni, the first brain foundation model that generalises across heterogeneous EEG and MEG recordings. To unify diverse data sources, we introduce BrainTokenizer, the first tokeniser that quantises spatiotemporal brain activity into discrete representations. Central to BrainTokenizer is a novel Sensor Encoder that encodes sensor properties such as spatial layout, orientation, and type, enabling compatibility across devices and modalities. Building upon the discrete representations, BrainOmni learns unified semantic embeddings of brain signals by self-supervised pretraining. To the best of our knowledge, it is the first foundation model to support both EEG and MEG signals, as well as the first to incorporate large-scale MEG pretraining. A total of 1,997 hours of EEG and 656 hours of MEG data are curated and standardised from publicly available sources for pretraining. Experiments show that BrainOmni outperforms both existing foundation models and state-of-the-art task-specific models on a range of downstream tasks. It also demonstrates strong generalisation to unseen EEG and MEG devices. Further analysis reveals that joint EEG-MEG (EMEG) training yields consistent improvements across both modalities. Code and checkpoints are publicly available at https://github.com/OpenTSLab/BrainOmni

NeurIPS Conference 2025 Conference Paper

PAROAttention: Pattern-Aware ReOrdering for Efficient Sparse and Quantized Attention in Visual Generation Models

  • Tianchen Zhao
  • Ke Hong
  • Xinhao Yang
  • Xuefeng Xiao
  • Huixia Li
  • Feng Ling
  • Ruiqi Xie
  • Siqi Chen

In visual generation, the quadratic complexity of attention mechanisms results in high memory and computational costs, especially for the longer token sequences required in high-resolution image or multi-frame video generation. To address this, prior research has explored techniques such as sparsification and quantization. However, these techniques face significant challenges under low density and reduced bitwidths. Through systematic analysis, we identify that the core difficulty stems from the dispersed and irregular characteristics of visual attention patterns. Therefore, instead of introducing specialized sparsification and quantization designs to accommodate such patterns, we propose an alternative strategy: "reorganizing" the attention pattern to alleviate the challenges. Inspired by the local aggregation nature of visual feature extraction, we design a novel Pattern-Aware token ReOrdering (PARO) technique, which unifies the diverse attention patterns into a hardware-friendly block-wise pattern. This unification substantially simplifies and enhances both sparsification and quantization. We evaluate the performance-efficiency trade-offs of various design choices and finalize a methodology tailored for the unified pattern. Our approach, PAROAttention, achieves video and image generation with lossless metrics and nearly identical results to full-precision (FP) baselines, while operating at notably lower density (20%-30%) and bitwidth (INT8/INT4), achieving a 1.9-2.7x end-to-end latency speedup.
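As a toy illustration of the reordering idea only (not the paper's actual PARO algorithm, whose reordering is derived from the observed attention patterns), the sketch below permutes the tokens of a 2D grid so that spatially local tokens become contiguous, turning a dispersed locality-based attention mask into a block-diagonal one that block-sparse kernels can exploit. The grid size, tile size, and `tile_reorder` helper are all hypothetical:

```python
import numpy as np

def tile_reorder(h, w, tile):
    """Permutation that groups tokens of an h x w grid into contiguous
    tile x tile blocks (a hypothetical stand-in for PARO's reordering)."""
    idx = np.arange(h * w).reshape(h, w)
    order = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            order.extend(idx[ty:ty + tile, tx:tx + tile].ravel())
    return np.array(order)

# Toy "local" attention: token i attends to j iff they share a 2x2 tile.
h = w = 4
perm = tile_reorder(h, w, tile=2)
ys, xs = np.divmod(np.arange(h * w), w)
same_tile = ((ys[:, None] // 2 == ys[None, :] // 2) &
             (xs[:, None] // 2 == xs[None, :] // 2))

# In row-major order the mask is dispersed; after reordering rows and
# columns by the permutation it becomes exactly block-diagonal.
reordered = same_tile[perm][:, perm]
block = np.kron(np.eye(h * w // 4, dtype=bool), np.ones((4, 4), dtype=bool))
print(np.array_equal(reordered, block))  # True
```

The same permutation applied to queries, keys, and values leaves attention outputs unchanged (up to the inverse permutation), which is why a pure reordering can simplify sparsification without hurting quality.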

AAAI Conference 2025 Conference Paper

pFedGPA: Diffusion-based Generative Parameter Aggregation for Personalized Federated Learning

  • Jiahao Lai
  • Jiaqi Li
  • Jian Xu
  • Yanru Wu
  • Boshi Tang
  • Siqi Chen
  • Yongfeng Huang
  • Wenbo Ding

Federated Learning (FL) offers a decentralized approach to model training, where data remains local and only model parameters are shared between the clients and the central server. Traditional methods, such as Federated Averaging (FedAvg), linearly aggregate these parameters, which are usually trained on heterogeneous data distributions, potentially overlooking the complex, high-dimensional nature of the parameter space. This can result in degraded performance of the aggregated model. While personalized FL approaches can mitigate the heterogeneous data issue to some extent, the limitation of linear aggregation remains unresolved. To alleviate this issue, we investigate the generative approach of diffusion models and propose a novel generative parameter aggregation framework for personalized FL, pFedGPA. In this framework, we deploy a diffusion model on the server to integrate the diverse parameter distributions and propose a parameter inversion method to efficiently generate a set of personalized parameters for each client. This inversion method transforms the uploaded parameters into a latent code, which is then aggregated through denoising sampling to produce the final personalized parameters. By encoding the dependence of a client's model parameters on the specific data distribution using the high-capacity diffusion model, pFedGPA can effectively decouple the complexity of the overall distribution of all clients' model parameters from the complexity of each individual client's parameter distribution. Our experimental results consistently demonstrate the superior performance of the proposed method across multiple datasets, surpassing baseline approaches.
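For reference, the linear aggregation that the abstract contrasts against is simply a data-size-weighted average of client parameter vectors. The minimal FedAvg sketch below (function name and toy numbers are illustrative, not from the paper) makes that step concrete; pFedGPA replaces it with per-client generative denoising sampling:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg's linear aggregation: a data-size-weighted average of the
    clients' flattened parameter vectors."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                       # normalize to sum to 1
    stacked = np.stack(client_params)              # (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Two clients with heterogeneous parameters and unequal data sizes (3:1).
theta = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [3, 1])
print(theta)  # [0.75 0.25]
```

Because the average is taken coordinate-wise, any nonlinear structure in how per-client optima relate to one another is lost, which is the limitation the generative aggregation is meant to address.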

AAMAS Conference 2024 Conference Paper

ANOTO: Improving Automated Negotiation via Offline-to-Online Reinforcement Learning

  • Siqi Chen
  • Jianing Zhao
  • Kai Zhao
  • Gerhard Weiss
  • Fengyun Zhang
  • Ran Su
  • Yang Dong
  • Daqian Li

Automated negotiation is a crucial component for establishing cooperation and collaboration within multi-agent systems. While reinforcement learning (RL)-based negotiating agents have achieved remarkable success in various scenarios, they still face limitations due to certain assumptions on which they are based. In this work, we propose a novel approach called ANOTO to improve negotiating agents' ability via offline-to-online RL. ANOTO enables a negotiating agent (1) to communicate with opponents using an end-to-end strategy that covers all negotiation actions, (2) to learn negotiation strategies from historical offline data without requiring active interactions, and (3) to enhance the optimization process during the online phase, facilitating rapid and stable performance improvements for the learned offline strategies. Experimental results, based on a number of negotiation scenarios and recent winning agents from the Automated Negotiating Agents Competitions (ANAC), are provided.

IJCAI Conference 2024 Conference Paper

Causality-enhanced Discrete Physics-informed Neural Networks for Predicting Evolutionary Equations

  • Ye Li
  • Siqi Chen
  • Bin Shan
  • Sheng-Jun Huang

Physics-informed neural networks (PINNs) have shown promising potential for solving partial differential equations (PDEs) using deep learning. However, PINNs face training difficulties for evolutionary PDEs, particularly for dynamical systems whose solutions exhibit multi-scale or turbulent behavior over time. The reason is that PINNs may violate the temporal causality property since all the temporal features in the PINNs loss are trained simultaneously. This paper proposes to use implicit time differencing schemes to enforce temporal causality, and use transfer learning to sequentially update the PINNs in space as surrogates for PDE solutions in different time frames. The evolving PINNs are better able to capture the varying complexities of the evolutionary equations, while only requiring minor updates between adjacent time frames. Our method is theoretically proven to be convergent if the time step is small and each PINN in different time frames is well-trained. In addition, we provide state-of-the-art (SOTA) numerical results for a variety of benchmarks for which existing PINNs formulations may fail or be inefficient. We demonstrate that the proposed method improves the accuracy of PINNs approximation for evolutionary PDEs and improves efficiency by a factor of 4–40x. The code is available at https://github.com/SiqiChen9/TL-DPINNs.
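A minimal sketch of the implicit time-differencing idea, assuming a backward-Euler discretisation of u_t = N(u) (the paper's actual schemes and its PINN parameterisation are more general): each time frame's network would be trained to drive the implicit residual r = u_{n+1} - u_n - dt * N(u_{n+1}) to zero, so each frame depends only on the previous one, which is what enforces temporal causality. Here fixed-point iteration on the toy case N(u) = -u stands in for training a network:

```python
def implicit_residual(u_next, u_prev, dt, rhs):
    """Backward-Euler residual for u_t = rhs(u); a frame's PINN would be
    trained to make this residual vanish at collocation points."""
    return u_next - u_prev - dt * rhs(u_next)

rhs = lambda u: -u          # toy right-hand side N(u) = -u
dt, u0 = 0.1, 1.0

# Solve one implicit step by fixed-point iteration (a stand-in for
# minimising the residual over network parameters).
u1 = u0
for _ in range(100):
    u1 = u0 + dt * rhs(u1)

# For this linear rhs, the implicit step has the closed form u0 / (1 + dt).
print(abs(u1 - u0 / (1 + dt)) < 1e-8)
print(abs(implicit_residual(u1, u0, dt, rhs)) < 1e-8)
```

Stepping frame by frame this way, with each frame's solution warm-starting the next (the transfer-learning aspect), is what keeps the per-frame updates small.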

AAMAS Conference 2023 Conference Paper

Transfer Learning based Agent for Automated Negotiation

  • Siqi Chen
  • Qisong Sun
  • Heng You
  • Tianpei Yang
  • Jianye Hao

Although great success has been achieved in automated negotiation, a major issue still stands out: learning a policy from scratch whenever an agent encounters an unknown opponent is inefficient. Transfer learning (TL) can alleviate this problem by utilizing the knowledge of previously learned policies to accelerate the current task learning. This work presents a novel Transfer Learning-based Negotiating Agent (TLNAgent) framework that allows an autonomous agent to transfer previous knowledge from source policies to help with new tasks, while boosting its performance. TLNAgent comprises three key components: the negotiation module, the adaptation module and the transfer module. Specifically, the negotiation module is responsible for interacting with the other agent during negotiation. The adaptation module measures the helpfulness of each source policy based on a fusion of two selection mechanisms. The transfer module is based on lateral connections between source and target networks and accelerates the agent's training by transferring knowledge from the selected source policy. Our comprehensive experiments clearly demonstrate that TL is effective in the context of automated negotiation, and TLNAgent outperforms state-of-the-art negotiating agents in various domains.

ICLR Conference 2023 Conference Paper

Video Scene Graph Generation from Single-Frame Weak Supervision

  • Siqi Chen
  • Jun Xiao 0001
  • Long Chen 0016

Video scene graph generation (VidSGG) aims to generate a sequence of graph-structure representations for a given video. However, all existing VidSGG methods are fully supervised, i.e., they need dense and costly manual annotations. In this paper, we propose the first weakly-supervised VidSGG task with only single-frame weak supervision: SF-VidSGG. By "weakly-supervised", we mean that SF-VidSGG relaxes the training supervision at two different levels: 1) it only provides single-frame annotations instead of all-frame annotations; 2) the single-frame ground-truth annotation is still a weak image SGG annotation, i.e., an unlocalized scene graph. To solve this new task, we also propose a novel Pseudo Label Assignment based method, dubbed PLA. PLA is a two-stage method, which generates pseudo visual relation annotations for the given video in the first stage and then trains a fully-supervised VidSGG model with these pseudo labels. Specifically, PLA consists of three modules: an object PLA module, a predicate PLA module, and a future predicate prediction (FPP) module. First, in the object PLA, we localize all objects for every frame. Then, in the predicate PLA, we design two different teachers to assign pseudo predicate labels. Lastly, in the FPP module, we fuse these two predicate pseudo labels using the regularity of relation transitions in videos. Extensive ablations and results on the Action Genome benchmark demonstrate the effectiveness of our PLA.

AAMAS Conference 2019 Conference Paper

ONECG: Online Negotiation Environment for Coalitional Games

  • Siqi Chen
  • Yonghao Cui
  • Cong Shang
  • Jianye Hao
  • Gerhard Weiss

Coalitional games can be used to model a variety of real-world problems. In coalitional game theory, how players form coalitions and divide payoffs is one fundamental issue to be answered. This demo presents an online negotiation environment for coalitional games (ONECG), in which coalitional negotiation can be conducted in a distributed way between people, agents, or in mixed settings via offer exchange and natural language communication. ONECG also allows configuration of the specifications of coalitional games, and supports the rapid development of new negotiating agents through a set of well-defined APIs. This new environment helps facilitate research on training human negotiation skills in coalitional games as well as the design of negotiation agents.

AAMAS Conference 2018 Conference Paper

SCC-rFMQ Learning in Cooperative Markov Games with Continuous Actions

  • Chengwei Zhang
  • Xiaohong Li
  • Jianye Hao
  • Siqi Chen
  • Karl Tuyls
  • Zhiyong Feng

Although many reinforcement learning methods have been proposed for learning optimal solutions in single-agent continuous action domains, multiagent coordination domains with continuous actions have received relatively little attention. In this paper, we propose an independent-learner hierarchical method, named Sample Continuous Coordination with recursive Frequency Maximum Q-Value (SCC-rFMQ), which divides the coordination problem into two layers. The first layer samples a finite set of actions from the continuous action space using a sampling mechanism with variable exploratory rates, and the second layer evaluates the actions in the sampled action set and updates the policy using a multiagent reinforcement learning coordination method. By constructing coordination mechanisms at both levels, SCC-rFMQ can handle coordination problems in continuous-action cooperative Markov games effectively. Experimental results show that SCC-rFMQ outperforms other reinforcement learning algorithms.

TAAS Journal 2014 Journal Article

An Intelligent Agent for Bilateral Negotiation with Unknown Opponents in Continuous-Time Domains

  • Siqi Chen
  • Gerhard Weiss

Automated negotiation among self-interested autonomous agents has gained tremendous attention due to its broad range of potential real-world applications. This article deals with a prominent type of such negotiations, namely, multi-issue negotiation that runs under continuous-time constraints and in which the negotiating agents have no prior knowledge about their opponents' preferences and strategies. A negotiation strategy called Dragon is described that employs sparse pseudo-input Gaussian processes. Specifically, Dragon enables an agent (1) to precisely model the behavior of its opponents with comparably low computational load and (2) to make decisions effectively and adaptively in very complex negotiation settings. Extensive experimental results, based on a number of negotiation scenarios and state-of-the-art negotiating agents from the Automated Negotiating Agents Competitions, are provided. Moreover, the robustness of our strategy is evaluated through both empirical game-theoretic and spatial evolutionary game-theoretic analysis.

IJCAI Conference 2013 Conference Paper

Conditional Restricted Boltzmann Machines for Negotiations in Highly Competitive and Complex Domains

  • Siqi Chen
  • Haitham Bou Ammar
  • Karl Tuyls
  • Gerhard Weiss

Learning in automated negotiations, while useful, is hard because of the indirect way the target function can be observed and the limited amount of experience available to learn from. This paper proposes two novel opponent modeling techniques based on deep learning methods. Moreover, to improve the learning efficacy of negotiating agents, the second approach is also capable of transferring knowledge efficiently between negotiation tasks. Transfer is conducted by automatically mapping the source knowledge to the target in a rich feature space. Experiments show that using these techniques the proposed strategies outperform existing state-of-the-art agents in highly competitive and complex negotiation domains. Furthermore, the empirical game theoretic analysis reveals the robustness of the proposed strategies.