Arrow Research search

Author name cluster

Changyi Lin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
1 author row

Possible papers (6)

ICRA Conference 2025 Conference Paper

Agile Continuous Jumping in Discontinuous Terrains

  • Yuxiang Yang 0007
  • Guanya Shi
  • Changyi Lin
  • Xiangyun Meng
  • Rosario Scalise
  • Mateo Guaman Castro
  • Wenhao Yu 0003
  • Tingnan Zhang

We focus on agile, continuous, and terrain-adaptive jumping of quadrupedal robots in discontinuous terrains such as stairs and stepping stones. Unlike single-step jumping, continuous jumping requires accurately executing highly dynamic motions over long horizons, which is challenging for existing approaches. To accomplish this task, we design a hierarchical learning and control framework, which consists of a learned heightmap predictor for robust terrain perception, a reinforcement-learning-based centroidal-level motion policy for versatile and terrain-adaptive planning, and a low-level model-based leg controller for accurate motion tracking. In addition, we minimize the sim-to-real gap by accurately modeling the hardware characteristics. Our framework enables a Unitree Go1 robot to perform agile and continuous jumps on human-sized stairs and sparse stepping stones, for the first time to the best of our knowledge. In particular, the robot crosses two stair steps in each jump and completes a 3.5 m long, 2.8 m high, 14-step staircase in 4.5 seconds. Moreover, the same policy outperforms baselines in various other parkour tasks, such as jumping over single horizontal or vertical discontinuities. Experiment videos can be found at https://yxyang.github.io/jumping_cod/.

IROS Conference 2025 Conference Paper

DTactive: A Vision-Based Tactile Sensor with Active Surface

  • Jikai Xu
  • Lei Wu
  • Changyi Lin
  • Ding Zhao
  • Huazhe Xu

The development of vision-based tactile sensors has significantly enhanced robots’ perception and manipulation capabilities, especially for tasks requiring contact-rich interactions with objects. In this work, we present DTactive, a novel vision-based tactile sensor with active surfaces. DTactive inherits and modifies the tactile 3D shape reconstruction method of DTact while integrating a mechanical transmission mechanism that facilitates the mobility of its surface. Thanks to this design, the sensor is capable of simultaneously performing tactile perception and in-hand manipulation with surface movement. Leveraging the high-resolution tactile images from the sensor and the magnetic encoder data from the transmission mechanism, we propose a learning-based method to enable precise angular trajectory control during in-hand manipulation. In our experiments, we successfully achieved accurate rolling manipulation within the range of [−180°, 180°] on various objects, with the root mean square error between the desired and actual angular trajectories being less than 12° on nine trained objects and less than 19° on three novel objects. The results demonstrate the potential of DTactive for in-hand object manipulation in terms of effectiveness, robustness and precision.

IROS Conference 2025 Conference Paper

QuietPaw: Learning Quadrupedal Locomotion with Versatile Noise Preference Alignment

  • Yuyou Zhang
  • Yihang Yao
  • Shiqi Liu 0005
  • Yaru Niu
  • Changyi Lin
  • Yuxiang Yang 0007
  • Wenhao Yu 0003
  • Tingnan Zhang

When operating at their full capacity, quadrupedal robots can produce loud footstep noise, which can be disruptive in human-centered environments like homes, offices, and hospitals. As a result, balancing locomotion performance with noise constraints is crucial for the successful real-world deployment of quadrupedal robots. However, achieving adaptive noise control is challenging due to (a) the trade-off between agility and noise minimization, (b) the need for generalization across diverse deployment conditions, and (c) the difficulty of effectively adjusting policies based on noise requirements. We propose QuietPaw, a framework incorporating our Conditional Noise-Constrained Policy (CNCP), a constrained learning-based algorithm that enables flexible, noise-aware locomotion by conditioning policy behavior on noise-reduction levels. We leverage value representation decomposition in the critics, disentangling state representations from condition-dependent representations; this allows a single versatile policy to generalize across noise levels without retraining while improving the Pareto trade-off between agility and noise reduction. We validate our approach in simulation and the real world, demonstrating that CNCP can effectively balance locomotion performance and noise constraints, achieving continuously adjustable noise reduction.

ICRA Conference 2024 Conference Paper

ArrayBot: Reinforcement Learning for Generalizable Distributed Manipulation through Touch

  • Zhengrong Xue
  • Han Zhang 0058
  • Jingwen Cheng
  • Zhengmao He
  • Yuanchen Ju
  • Changyi Lin
  • Gu Zhang
  • Huazhe Xu

We present ArrayBot, a distributed manipulation system consisting of a 16 × 16 array of vertically sliding pillars integrated with tactile sensors. Functionally, ArrayBot is designed to simultaneously support, perceive, and manipulate tabletop objects. Towards generalizable distributed manipulation, we leverage reinforcement learning (RL) algorithms for the automatic discovery of control policies. In the face of the massively redundant actions, we propose to reshape the action space by considering the spatially local action patch and the low-frequency actions in the frequency domain. With this reshaped action space, we train RL agents that can relocate diverse objects through tactile observations only. Intriguingly, we find that the discovered policy can not only generalize to unseen object shapes in the simulator but also transfer to the physical robot without any sim-to-real fine-tuning. Leveraging the deployed policy, we derive more real-world manipulation skills on ArrayBot to further illustrate the distinctive merits of our proposed system.

IROS Conference 2024 Conference Paper

LocoMan: Advancing Versatile Quadrupedal Dexterity with Lightweight Loco-Manipulators

  • Changyi Lin
  • Xingyu Liu 0001
  • Yuxiang Yang 0007
  • Yaru Niu
  • Wenhao Yu 0003
  • Tingnan Zhang
  • Jie Tan 0001
  • Byron Boots

Quadrupedal robots have emerged as versatile agents capable of locomoting and manipulating in complex environments. Traditional designs typically rely on the robot’s inherent body parts or incorporate top-mounted arms for manipulation tasks. However, these configurations may limit the robot’s operational dexterity, efficiency, and adaptability, particularly in cluttered or constrained spaces. In this work, we present LocoMan, a dexterous quadrupedal robot with a novel morphology to perform versatile manipulation in diverse constrained environments. By equipping a Unitree Go1 robot with two low-cost and lightweight modular 3-DoF loco-manipulators on its front calves, LocoMan leverages the combined mobility and functionality of the legs and grippers for complex manipulation tasks that require precise 6D positioning of the end effector in a wide workspace. To harness the loco-manipulation capabilities of LocoMan, we introduce a unified control framework that extends the whole-body controller (WBC) to integrate the dynamics of loco-manipulators. Through experiments, we validate that the proposed whole-body controller can accurately and stably follow desired 6D trajectories of the end effector and torso, which, when combined with the large workspace from our design, facilitates a diverse set of challenging dexterous loco-manipulation tasks in confined spaces, such as opening doors, plugging into sockets, picking objects in narrow and low-lying spaces, and bimanual manipulation.

ICRA Conference 2023 Conference Paper

DTact: A Vision-Based Tactile Sensor that Measures High-Resolution 3D Geometry Directly from Darkness

  • Changyi Lin
  • Ziqi Lin
  • Shaoxiong Wang
  • Huazhe Xu

Vision-based tactile sensors that can measure the 3D geometry of contacting objects are crucial for robots to perform dexterous manipulation tasks. However, existing sensors are usually complicated to fabricate and delicate to extend. In this work, we take novel advantage of the reflection properties of semitransparent elastomers to design a robust, low-cost, and easy-to-fabricate tactile sensor named DTact. DTact measures high-resolution 3D geometry accurately from the darkness shown in the captured tactile images, requiring only a single image for calibration. In contrast to previous sensors, DTact is robust under various illumination conditions. We then build prototypes of DTact with non-planar contact surfaces at minimal extra effort and cost. Finally, we perform two intelligent robotic tasks, pose estimation and object recognition, using DTact, demonstrating its large potential in practical applications.