
Author name cluster

Jianan Lin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

AAAI Conference 2026 Conference Paper

FedAU2: Attribute Unlearning for User-Level Federated Recommender Systems with Adaptive and Robust Adversarial Training

  • Yuyuan Li
  • Junjie Fang
  • Fengyuan Yu
  • Xichun Sheng
  • Tianyu Du
  • Xuyang Teng
  • Shaowei Jiang
  • Linbo Jiang

Federated Recommender Systems (FedRecs) leverage federated learning to protect user privacy by retaining data locally. However, user embeddings in FedRecs often encode sensitive attribute information, rendering them vulnerable to attribute inference attacks. Attribute unlearning has emerged as a promising approach to mitigate this issue. In this paper, we focus on user-level FedRecs, a more practical yet more challenging setting than group-level FedRecs, in which adversarial training is the most feasible approach. We identify two key challenges in implementing adversarial-training-based attribute unlearning for user-level FedRecs: (CH1) mitigating training instability caused by user data heterogeneity, and (CH2) preventing attribute information leakage through gradients. To address these challenges, we propose FedAU2, an attribute unlearning method for user-level FedRecs. For CH1, we propose an adaptive adversarial training strategy that adjusts the training dynamics in response to local optimization behavior. For CH2, we propose a dual-stochastic variational autoencoder that perturbs the adversarial model, effectively preventing gradient-based information leakage. Extensive experiments on three real-world datasets demonstrate that FedAU2 achieves superior unlearning effectiveness and recommendation performance compared to existing baselines.
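
As a rough illustration of the adversarial-training idea this abstract builds on, the sketch below shows one generic alternating update for attribute unlearning: an adversary learns to predict the sensitive attribute from the user embedding, while the recommender fits ratings and pushes the embedding to defeat the adversary. This is not FedAU2 itself; the model interfaces, loss choices, and the weight lam are all illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def adversarial_unlearning_step(rec_model, adversary, opt_rec, opt_adv,
                                    users, items, ratings, attrs, lam=1.0):
        # (1) Adversary update: predict the sensitive attribute from the
        # detached user embedding (gradients do not reach the recommender).
        emb = rec_model.user_embedding(users).detach()
        adv_loss = F.cross_entropy(adversary(emb), attrs)
        opt_adv.zero_grad()
        adv_loss.backward()
        opt_adv.step()

        # (2) Recommender update: fit the ratings while *maximizing* the
        # adversary's loss on the embedding (hence the minus sign), so the
        # embedding gradually stops encoding the attribute.
        emb = rec_model.user_embedding(users)
        rec_loss = F.mse_loss(rec_model(users, items), ratings)
        total = rec_loss - lam * F.cross_entropy(adversary(emb), attrs)
        opt_rec.zero_grad()
        total.backward()
        opt_rec.step()
        return rec_loss.item(), adv_loss.item()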

JAAMAS Journal 2026 Journal Article

Strategyproof facility location with prediction: minimizing the maximum cost

  • Hau Chan
  • Jianan Lin
  • Chenhao Wang

We study the mechanism design problem of facility location on a metric space in the learning-augmented framework, where mechanisms have access to imperfect predictions of the optimal facility locations. Our objective is to design strategyproof (SP) mechanisms that truthfully elicit agents’ preferences over facility locations and, using the given prediction, select a facility location that approximately minimizes the maximum cost among all agents. In particular, we seek SP mechanisms whose approximation guarantees depend on the prediction error: they should achieve improved performance when the prediction is accurate (the property of consistency) while still ensuring strong worst-case guarantees when the prediction is arbitrarily inaccurate (the property of robustness). On the real line, we characterize all deterministic SP mechanisms with consistency strictly better than 2 and bounded robustness for the maximum cost. We show that any such mechanism must coincide with the MinMaxP mechanism, which returns the prediction if it lies between the two extreme agent locations and otherwise returns the agent location closest to the prediction. For any prediction error \(\eta \ge 0\), we prove that MinMaxP achieves a \((1+\min(1,\eta))\)-approximation and that no deterministic SP mechanism can obtain a better approximation ratio. In addition, for two-dimensional spaces with the \(\ell^p\) distance, we analyze the approximation guarantees of a deterministic mechanism that applies MinMaxP independently on each coordinate, as well as a randomized mechanism that selects between two deterministic mechanisms with carefully chosen probabilities. We further extend these results to the \(L_p\)-norm social cost objective on the line metric and the maximum cost objective on the tree metric. Finally, we examine the group strategyproofness of the mechanisms.
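
Since the abstract fully specifies the MinMaxP rule on the real line, a short sketch may be helpful; the function name and the demo values below are ours, not the paper's.

    def min_max_p(locations: list[float], prediction: float) -> float:
        """MinMaxP as described above: trust the prediction if it lies
        between the extreme agent locations, otherwise return the agent
        location closest to the prediction."""
        lo, hi = min(locations), max(locations)
        if lo <= prediction <= hi:
            return prediction
        # Outside the agents' range, the closest agent location is one of
        # the two extremes, so this is equivalent to clamping to [lo, hi].
        return min(locations, key=lambda x: abs(x - prediction))

    print(min_max_p([0.0, 1.0, 4.0], 2.0))  # accurate prediction: returns 2.0
    print(min_max_p([0.0, 1.0, 4.0], 9.0))  # outlying prediction: returns 4.0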

AAAI Conference 2026 Conference Paper

TOFA: Training-Free One-Shot Federated Adaptation for Vision-Language Models

  • Li Zhang
  • Zhongxuan Han
  • XiaoHua Feng
  • Jiaming Zhang
  • Yuyuan Li
  • Linbo Jiang
  • Jianan Lin
  • Chaochao Chen

Efficient and lightweight adaptation of pre-trained Vision-Language Models (VLMs) to downstream tasks through collaborative interactions between local clients and a central server is a rapidly emerging research topic in federated learning. Existing adaptation algorithms are typically trained iteratively, which incurs significant communication costs and increases susceptibility to potential attacks. Motivated by one-shot federated training techniques that reduce client-server exchanges to a single round, a lightweight one-shot federated VLM adaptation method is particularly attractive for alleviating these issues. However, current one-shot approaches face certain challenges in adapting VLMs within federated settings: (1) insufficient exploitation of the rich multimodal information inherent in VLMs; (2) lack of specialized adaptation strategies to systematically handle severe data heterogeneity; and (3) the need for additional training resources on clients or the server. To bridge these gaps, we propose a novel Training-free One-shot Federated Adaptation framework for VLMs, named TOFA. To fully leverage the generalizable multimodal features in pre-trained VLMs, TOFA employs both visual and textual pipelines to extract task-relevant representations. In the visual pipeline, a hierarchical Bayesian model learns personalized, class-specific prototype distributions. In the textual pipeline, TOFA evaluates and globally aligns the generated local text prompts for robustness. An adaptive weight calibration mechanism is also introduced to combine predictions from both modalities, balancing personalization and robustness to handle data heterogeneity. Our method is training-free, requiring no additional training resources on either the client or server side. Extensive experiments across 9 datasets in various federated settings demonstrate the effectiveness of the proposed TOFA method.
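
For intuition, here is a toy sketch of combining modality-specific predictions with a per-sample adaptive weight, in the spirit of the calibration step described above; the confidence-based weighting rule is our guess for illustration, not TOFA's actual mechanism.

    import torch

    def fuse_predictions(visual_logits: torch.Tensor,
                         text_logits: torch.Tensor) -> torch.Tensor:
        """Convex per-sample combination of two (N, C) logit tensors."""
        # Use each head's maximum softmax probability as a crude
        # confidence score (an assumption, not the paper's calibration).
        conf_v = visual_logits.softmax(dim=-1).max(dim=-1).values
        conf_t = text_logits.softmax(dim=-1).max(dim=-1).values
        w = (conf_v / (conf_v + conf_t)).unsqueeze(-1)  # weight in (0, 1)
        return w * visual_logits + (1 - w) * text_logits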

AAAI Conference 2025 Conference Paper

Mechanism Design for Connecting Regions Under Disruptions

  • Hau Chan
  • Jianan Lin
  • Zining Qin
  • Chenhao Wang

Man-made and natural disruptions such as planned road constructions, suspended bridges, and roads blocked by trees, mudslides, or floods can create obstacles that separate two connected regions. As a result, agents' ability to travel from their respective regions and reach other regions can be affected. To minimize the impact of the obstacles and maintain agent accessibility, we initiate the study of constructing a new pathway (e.g., a detour or new bridge) connecting the regions disconnected by obstacles, from the mechanism design perspective. In this problem, each agent has a private location in their region and is required to access the other region. The cost of an agent is the distance from their location to the other region via the pathway. Our goal is to design strategyproof mechanisms that elicit truthful locations from the agents and approximately optimize the social or maximum cost of agents by determining the locations in the regions at which the pathway is built. We provide a characterization of all strategyproof and anonymous mechanisms. For the social and maximum cost objectives, we provide upper and lower bounds on the approximation ratios of strategyproof mechanisms.
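
For reference, the two objectives named in the abstract are the standard aggregate and egalitarian ones; in our notation (not the paper's), with \(\mathrm{cost}_i\) denoting agent \(i\)'s distance to the other region via the chosen pathway,

\[ \mathrm{SC} = \sum_{i=1}^{n} \mathrm{cost}_i, \qquad \mathrm{MC} = \max_{1 \le i \le n} \mathrm{cost}_i . \]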

ECAI Conference 2025 Conference Paper

Safe APG: Accelerated Policy Gradient Algorithm for Secure Policy Updating in Reinforcement Learning

  • Jianan Lin
  • Yao Chen 0003
  • Zhengyang Ji
  • Yuan Meng
  • Bo Hou
  • Shaolin Tan

Inverse reinforcement learning (IRL) aims to infer the reward function from expert demonstrations. However, as IRL techniques are increasingly applied in high-stakes domains such as autonomous driving and military decision-making, reward function leakage has emerged as a critical risk, potentially leading to severe security threats and unintended consequences. To address this challenge, we propose Safe Accelerated Policy Gradient (Safe APG), a method designed to enhance the learning security of the demonstrating agent by preventing observers from inferring its reward function. The core idea behind Safe APG is to incorporate a carefully constructed and theoretically guaranteed structural noise into Nesterov’s Accelerated Gradient (NAG) for policy updating, concealing critical gradient information from observers while preserving the geometric convergence property of NAG. Results from numerical experiments and simulations in reinforcement learning environments demonstrate that the proposed method not only significantly mitigates reward function leakage, but also achieves superior convergence rates even under the perturbation introduced by the structural noise.
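
For context, below is a minimal sketch of the Nesterov-style policy update named above, with a placeholder hook where a perturbation could enter. The actual structural-noise construction is the paper's contribution and is not reproduced here; grad_fn, noise_fn, and the hyperparameters are illustrative assumptions.

    import numpy as np

    def nag_policy_ascent(grad_fn, theta0, lr=0.01, momentum=0.9,
                          steps=100, noise_fn=None):
        """Nesterov-accelerated gradient *ascent* on policy parameters.
        grad_fn(theta) should return a policy-gradient estimate at theta."""
        theta = theta0.copy()
        theta_prev = theta0.copy()
        for _ in range(steps):
            # NAG look-ahead: extrapolate along the previous step before
            # evaluating the gradient.
            lookahead = theta + momentum * (theta - theta_prev)
            g = grad_fn(lookahead)
            if noise_fn is not None:
                g = g + noise_fn(g)  # placeholder hook, NOT the paper's noise
            theta_prev, theta = theta, lookahead + lr * g
        return theta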