Arrow Research search

Author name cluster

Jinglin Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

11 papers
2 author rows

Possible papers

AAAI Conference 2026 Conference Paper

From Discriminative to Generative: A Diffusion-Based Paradigm for Multi-Agent Collaborative Perception

  • Kexin Gong
  • Puyi Yao
  • Guiyang Luo
  • Quan Yuan
  • Tiange Fu
  • Hui Zhang
  • Jinglin Li

Collaborative perception leveraging intermediate feature fusion has emerged as a leading paradigm for significantly enhancing the environmental perception capabilities of autonomous driving systems. However, existing methods typically rely on discriminative supervision guided by downstream tasks. This paradigm compels models to learn minimal, task-specific representations, which conflicts with cooperative perception's goal of capturing comprehensive information, thereby limiting generalization. To address this issue, we propose DiGS-CP, a novel two-stage generative supervised collaborative perception framework. Specifically, we introduce a diffusion-based generative task that conditions on fused object-level features to generate representations of object-level point clouds. The proposed generative supervision provides fine-grained, task-agnostic signals that encourage the fusion module to learn comprehensive representations beyond task-specific requirements. By preserving and integrating complementary information from collaborative agents, our approach overcomes the limitations of task-specific learning and enhances the generalizability of the learned features. Furthermore, our two-stage architecture requires agents to transmit only object-level features, significantly reducing communication overhead. Extensive experiments on three benchmark datasets demonstrate that DiGS-CP achieves state-of-the-art performance in 3D object detection while maintaining low bandwidth requirements and exhibiting excellent generalization ability.

NeurIPS Conference 2025 Conference Paper

NegoCollab: A Common Representation Negotiation Approach for Heterogeneous Collaborative Perception

  • Congzhang Shao
  • Quan Yuan
  • Guiyang Luo
  • Yue Hu
  • Danni Wang
  • Liu Yilin
  • Rui Pan
  • Bo Chen

Collaborative perception improves task performance by expanding the perception range through information sharing among agents. Immutable heterogeneity poses a significant challenge in collaborative perception, as participating agents may employ different, fixed perception models. This leads to domain gaps in the intermediate features shared among agents, consequently degrading collaborative performance. Aligning the features of all agents to a common representation can eliminate domain gaps at low training cost. However, in existing methods the common representation is designated as the representation of a specific agent, making it difficult for agents with significant domain discrepancies from that agent to achieve proper alignment. This paper proposes NegoCollab, a heterogeneous collaboration method based on a negotiated common representation. It introduces a negotiator during training to derive the common representation from the local representations of each modality's agents, effectively reducing the inherent domain gaps with the various local representations. In NegoCollab, the mutual transformation of features between the local representation space and the common representation space is achieved by a pair of sender and receiver. To better align local representations to the common representation, which contains multimodal information, we introduce a structural alignment loss and a pragmatic alignment loss in addition to the distribution alignment loss to supervise training. This enables the knowledge in the common representation to be fully distilled into the sender. The experimental results demonstrate that NegoCollab significantly outperforms existing common representation-based collaboration methods. Obtaining common representations through negotiation provides a more reliable and flexible option for heterogeneous collaborative perception.

AAAI Conference 2023 Conference Paper

AlphaRoute: Large-Scale Coordinated Route Planning via Monte Carlo Tree Search

  • Guiyang Luo
  • Yantao Wang
  • Hui Zhang
  • Quan Yuan
  • Jinglin Li

This paper proposes AlphaRoute, an AlphaGo-inspired algorithm for coordinating large-scale routes, built upon graph attention reinforcement learning and Monte Carlo Tree Search (MCTS). We first partition the road network into regions and model large-scale coordinated route planning as a Markov game, where each partitioned region, rather than each driver, is treated as a player. AlphaRoute then applies a bilevel optimization framework consisting of several region planners and a global planner: each region planner coordinates the route choices for vehicles located in its region and generates several strategies, and the global planner evaluates the combinations of strategies. AlphaRoute is built on a graph attention network for evaluating each state and the MCTS algorithm for dynamically visiting and simulating future states to narrow down the search space. AlphaRoute is capable of 1) bridging user fairness and system efficiency, 2) achieving higher search efficiency by alleviating the curse of dimensionality, and 3) making effective and informed route plans by simulating the future to capture traffic dynamics. Comprehensive experiments are conducted on two real-world road networks against several baselines, and the results show that AlphaRoute achieves the lowest travel time and is efficient and effective for coordinating large-scale routes and alleviating traffic congestion. The code will be publicly available.
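At the heart of the MCTS component described in the abstract is the familiar selection rule that trades off exploiting well-scoring strategy combinations against exploring rarely visited ones. AlphaRoute's actual planner is not reproduced here; the snippet below is only an illustrative UCT (Upper Confidence bound for Trees) selection step under generic assumptions, and the `value`/`visits` bookkeeping and exploration constant `c` are hypothetical names, not taken from the paper.

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.4):
    """UCT: average value so far (exploitation) plus a bonus that
    shrinks as a child is visited more often (exploration)."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits, c=1.4):
    """Pick the child (e.g. a strategy combination) with the highest UCT score."""
    return max(children, key=lambda ch: uct_score(ch["value"], ch["visits"],
                                                  parent_visits, c))

# Hypothetical statistics for two candidate strategy combinations:
children = [{"value": 3.0, "visits": 10}, {"value": 1.0, "visits": 2}]
best = select_child(children, parent_visits=12)
```

Note how the exploration bonus steers the search toward the less-visited child even though its average value is lower, which is what lets MCTS narrow the search space without committing too early.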

IJCAI Conference 2023 Conference Paper

GPLight: Grouped Multi-agent Reinforcement Learning for Large-scale Traffic Signal Control

  • Yilin Liu
  • Guiyang Luo
  • Quan Yuan
  • Jinglin Li
  • Lei Jin
  • Bo Chen
  • Rui Pan

The use of multi-agent reinforcement learning (MARL) methods to coordinate traffic lights (CTL), treating each intersection as an agent, has become increasingly popular. However, existing MARL approaches either treat all agents as completely homogeneous, i.e., the same network and parameters for every agent, or as completely heterogeneous, i.e., different networks and parameters for each agent. This creates a difficult balance between accuracy and complexity, especially in large-scale CTL. To address this challenge, we propose a grouped MARL method named GPLight. We first mine the similarity between agent environments, considering both real-time traffic flow and static fine-grained road topology. We then propose two loss functions to maintain a learnable, dynamic clustering: one uses mutual information estimation for better stability, and the other maximizes separability between groups. Finally, GPLight enforces that the agents in a group share the same network and parameters. This approach reduces complexity by promoting cooperation within each group of agents while reflecting differences between groups to ensure accuracy. To verify the effectiveness of our method, we conduct experiments on both synthetic and real-world datasets with up to 1,089 intersections. Compared with state-of-the-art methods, the experimental results demonstrate the superiority of our proposed method, especially in large-scale CTL.
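The grouping idea in the abstract, clustering intersections with similar environments so that each group shares one set of network parameters, can be illustrated with a deliberately simplified sketch. GPLight learns its clustering with mutual-information and separability losses; the greedy threshold grouping below (`features` and `tau` are made-up names) only shows the grouping structure, not the paper's method.

```python
import math

def group_agents(features, tau=1.0):
    """Greedy grouping sketch: an agent whose feature vector (e.g. traffic-flow
    statistics for its intersection) lies within distance `tau` of an existing
    group's founding centroid joins that group; otherwise it starts a new one.
    Every agent in a group would then share one network and parameter set."""
    groups = []  # each entry: (centroid, [agent indices])
    for i, f in enumerate(features):
        for centroid, members in groups:
            if math.dist(f, centroid) <= tau:
                members.append(i)
                break
        else:
            groups.append((list(f), [i]))
    return [members for _, members in groups]

# Three hypothetical intersections: two with similar flow, one very different.
print(group_agents([(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)], tau=1.0))
```

Sharing parameters within a group is what keeps the method between the two extremes the abstract describes: fewer parameter sets than fully heterogeneous agents, but more expressive than a single shared network.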

ICRA Conference 2014 Conference Paper

Task-constrained continuum manipulation in cluttered space

  • Jinglin Li
  • Jing Xiao 0001

Continuum manipulators do not contain rigid links and can deform continuously to perform whole-arm manipulation. Hence, they are much more flexible than articulated manipulators for performing tasks in cluttered space. However, autonomous manipulation constrained by tasks other than grasping has not been studied for continuum manipulators. In this paper, we introduce a general and efficient approach for autonomous continuum manipulation under task constraints. We consider a spatial continuum manipulator consisting of multiple sections, each of uniform curvature when not deformed. We further apply the approach to an example inspection task in a cluttered environment to verify its effectiveness. The high efficiency of our approach makes it suitable to run online for guiding task-constrained manipulation in real time.

IROS Conference 2013 Conference Paper

Autonomous continuum grasping

  • Jinglin Li
  • Zhou Teng
  • Jing Xiao 0001
  • Apoorva Kapadia
  • Alan Bartow
  • Ian D. Walker

A continuum manipulator, such as a multi-section trunk/tentacle robot, is promising for deft manipulation of a wide range of objects of different shapes and sizes. Given an object, a continuum manipulator tries to grasp it by wrapping tightly around it. Autonomous grasping requires real-time determination of whether an object can be grasped after it is identified and, if so, of the feasible whole-arm wrapping-around configurations of the robot to grasp it, which we call grasping configurations, as well as the path leading to a grasping configuration. In this paper, we describe the process for autonomous grasping from object detection to executing the grasping motion and achieving force-closure grasps, with a focus on a general analysis of all possible types of planar grasping configurations of a three-section continuum manipulator. We further provide conditions for the existence of solutions and describe how to find a valid grasping configuration and the associated path automatically if one exists. Experimental results with the OctArm manipulator validate our approach and show that the entire process to determine an autonomous grasping operation, which includes automatic detection of the target object and determination of a grasping configuration and an obstacle-avoiding path to it, can take just a small fraction of a second. Once a grasping configuration is reached, the manipulator can lift the object stably, i.e., a force-closure grasp can be achieved.

ICRA Conference 2013 Conference Paper

Progressive generation of force-closure grasps for an n-section continuum manipulator

  • Jinglin Li
  • Jing Xiao 0001

A continuum manipulator, such as a multi-section trunk/tentacle robot, is promising for deft manipulation of a wide range of objects under uncertain conditions in less-structured and cluttered environments. With whole-arm grasping, it is adaptive to objects of different sizes and shapes. Previously, we introduced a method for automatically computing grasping configurations of a continuum manipulator with three constant-curvature sections based on minimum bounding circles of object cross-sections. However, using minimum bounding circles (or circumcircles, if they exist) alone may not result in tight and stable grasps. In this paper, we introduce an approach for progressively generating tight grasping configurations section by section to achieve a tight, force-closure whole-arm grasp. This approach applies directly to n-section continuum manipulators and generates a force-closure grasping configuration efficiently without requiring minimum bounding circles of a target object.

IROS Conference 2013 Conference Paper

Progressive, continuum grasping in cluttered space

  • Jinglin Li
  • Jing Xiao 0001

Continuum manipulators, inspired by invertebrate structures in nature, such as octopus arms and elephant trunks, do not contain rigid links, can deform, and are passively compliant, which makes them particularly flexible for manipulation in cluttered space. A key open issue is how to make such a manipulator autonomously grasp an object in cluttered space, especially if the object cannot be completely seen or known before being grasped. In this paper, we address this issue by introducing an approach that enables a multi-section continuum manipulator to probe an object with its tip while gradually forming a whole-arm, force-closure grasp by closely following the contour of the probed object. This real-time approach is both effective and efficient for grasping an object in cluttered space, as the test examples show.

ICRA Conference 2012 Conference Paper

Exact and efficient Collision Detection for a multi-section Continuum Manipulator

  • Jinglin Li
  • Jing Xiao 0001

Continuum manipulators, featuring "continuous backbone structures", are promising for deft manipulation of a wide range of objects under uncertain conditions in less-structured and cluttered environments. A multi-section trunk/tentacle robot is such a continuum manipulator. With a continuum robot, manipulation means a continuous whole-arm motion, where the arm is often bent into a continuously deforming concave shape. Approximating such an arm with a polygonal mesh for collision detection is expensive, not only because a fine mesh is required to approximate concavity but also because each time the manipulator deforms, a new mesh has to be built for the new configuration. However, most generic collision detection algorithms apply only to polygonal meshes or objects of convex primitives. In this paper, we propose an efficient algorithm for Collision Detection between an Exact Continuum Manipulator (CD-ECoM) and its environments, which is applicable to any continuum manipulator featuring multiple constant-curvature sections. Our test results show that this algorithm is accurate and more efficient in both time and space than approximating the continuum manipulator with polygonal meshes and applying an existing generic collision detection algorithm. The algorithm is essential for path/trajectory planning of continuum manipulators.
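The abstract's key observation is that each section of such a manipulator is a constant-curvature arc, so collisions can be checked against the exact curve instead of a rebuilt mesh. The CD-ECoM algorithm itself is not reproduced here; as a much simpler illustration, the sketch below computes the exact distance from a 2-D point obstacle to one circular-arc section. The function name and the center/radius/angle parameterization are my own assumptions, not the paper's.

```python
import math

def point_arc_distance(p, center, r, a0, a1):
    """Exact distance from point p to a circular arc (one section backbone),
    the arc running counter-clockwise from angle a0 to a1 about `center`
    with radius r. No mesh approximation is involved."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    theta = math.atan2(dy, dx)
    span = (a1 - a0) % (2 * math.pi)        # angular extent of the arc
    rel = (theta - a0) % (2 * math.pi)      # p's angle relative to the start
    if rel <= span:
        # Radial projection of p lands on the arc itself.
        return abs(math.hypot(dx, dy) - r)
    # Otherwise the nearest point is one of the two arc endpoints.
    ends = [(center[0] + r * math.cos(a), center[1] + r * math.sin(a))
            for a in (a0, a1)]
    return min(math.hypot(p[0] - ex, p[1] - ey) for ex, ey in ends)
```

Because the test is closed-form per section, re-checking after every deformation costs a handful of trig calls, which is the kind of saving over re-meshing that the abstract highlights.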

IROS Conference 2011 Conference Paper

Determining "grasping" configurations for a spatial continuum manipulator

  • Jinglin Li
  • Jing Xiao 0001

Unlike a conventional articulated manipulator, where only the gripper manipulates objects, a continuum manipulator, such as a multi-section trunk/tentacle robot, is promising for deft manipulation of a wide range of objects of different shapes and sizes. Given an object, a continuum manipulator tries to grasp it by wrapping around and squeezing it. A main open problem is how to determine whether the object can be grasped and, if so, the whole-arm wrapping-around configurations of the robot to grasp it, which we call grasping configurations. In this paper, we provide a general and complete analysis of grasping configurations of a spatial continuum manipulator consisting of three constant-curvature sections, for any given 3-D object. We formulate conditions for the existence of solutions and describe how to determine valid grasping configurations. Our method can be extended to general continuum manipulators of n constant-curvature sections (where n ≥ 3).
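A common building block behind the constant-curvature analyses in these abstracts is the closed-form kinematics of a single section: given a signed curvature and an arc length, the endpoint pose follows directly from circle geometry. The sketch below shows the standard planar version under generic assumptions; the function and parameter names are illustrative and not taken from the papers.

```python
import math

def section_endpoint(x, y, heading, kappa, length):
    """Planar endpoint pose of one constant-curvature section.
    Starting at (x, y) with tangent angle `heading`, the section bends
    with signed curvature `kappa` (1/radius) over arc length `length`."""
    if abs(kappa) < 1e-12:
        # Degenerate case: zero curvature means a straight section.
        return x + length * math.cos(heading), y + length * math.sin(heading), heading
    r = 1.0 / kappa
    new_heading = heading + kappa * length   # total bend angle = kappa * length
    # Chord from arc geometry: displacement about the circle's center.
    nx = x + r * (math.sin(new_heading) - math.sin(heading))
    ny = y - r * (math.cos(new_heading) - math.cos(heading))
    return nx, ny, new_heading

# A quarter-circle section of radius 1 starting at the origin, heading along +x:
print(section_endpoint(0.0, 0.0, 0.0, 1.0, math.pi / 2))
```

Chaining this map over three (or n) sections gives candidate whole-arm shapes, which is the search space over which grasping configurations like those in the abstract are determined.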