Arrow Research search

Author name cluster

Li Xu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

31 papers
2 author rows

Possible papers

31

AAAI Conference 2026 Conference Paper

RealRep: Generalized SDR-to-HDR Conversion via Attribute-Disentangled Representation Learning

  • Li Xu
  • Siqi Wang
  • Kepeng Xu
  • Lin Zhang
  • Gang He
  • Weiran Wang
  • Yu-Wing Tai

High-Dynamic-Range Wide-Color-Gamut (HDR-WCG) technology is becoming increasingly widespread, driving a growing need for converting Standard Dynamic Range (SDR) content to HDR. Existing methods primarily rely on fixed tone mapping operators, which struggle to handle the diverse appearances and degradations commonly present in real-world SDR content. To address this limitation, we propose a generalized SDR-to-HDR framework that enhances robustness by learning attribute-disentangled representations. Central to our approach is Realistic Attribute-Disentangled Representation Learning (RealRep), which explicitly disentangles luminance and chrominance components to capture intrinsic content variations across different SDR distributions. Furthermore, we design a Luma-/Chroma-aware negative exemplar generation strategy that constructs degradation-sensitive contrastive pairs, effectively modeling tone discrepancies across SDR styles. Building on these attribute-level priors, we introduce the Degradation-Domain Aware Controlled Mapping Network (DDACMNet), a lightweight, two-stage framework that performs adaptive hierarchical mapping guided by a control-aware normalization mechanism. DDACMNet dynamically modulates the mapping process via degradation-conditioned features, enabling robust adaptation across diverse degradation domains. Extensive experiments demonstrate that RealRep consistently outperforms state-of-the-art methods in both generalization and perceptually faithful HDR color gamut reconstruction.
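RealRep's disentanglement is learned, but the luminance/chrominance separation it builds on can be illustrated with the fixed BT.709 transform that such methods generalize. A minimal sketch (function name and usage are illustrative, not from the paper):

```python
import numpy as np

def rgb_to_luma_chroma(rgb):
    """Split an RGB image (H, W, 3), values in [0, 1], into a luminance
    plane and two chrominance planes using the BT.709 coefficients.
    This fixed transform only illustrates the luma/chroma separation
    that RealRep learns in a data-driven, disentangled form."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luminance (BT.709 weights)
    cb = (b - y) / 1.8556                       # blue-difference chroma
    cr = (r - y) / 1.5748                       # red-difference chroma
    return y, np.stack([cb, cr], axis=-1)

img = np.random.rand(4, 4, 3)
luma, chroma = rgb_to_luma_chroma(img)
```

A grayscale input (equal R, G, B) maps to zero chroma, which is the sanity check that the two attribute groups are actually separated.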

JBHI Journal 2025 Journal Article

A Novel Framework for Multimodal Brain Tumor Detection With Scarce Labels

  • Yanning Ge
  • Li Xu
  • Xiaoding Wang
  • Youxiong Que
  • Md. Jalil Piran

Brain tumor detection has advanced significantly with the development of deep learning technology. Although multimodal data, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), has potential advantages in diagnostics, most existing studies rely solely on a single modality. This is because common fusion methods may lead to the loss of critical information when attempting multimodal fusion. Therefore, effectively integrating multimodal data has become a significant challenge. Additionally, medical image analysis requires large amounts of annotated data, and labeling images is a resource-intensive task that demands experienced professionals to spend a considerable amount of time. To address these challenges, this paper introduces a new unsupervised learning framework named Double-SimCLR. This framework builds on the foundation of contrastive learning and features a dual-branch structure, enabling direct and simultaneous processing of MRI and CT images for multimodal feature fusion. Given the “weak feature” characteristics of CT images (e.g., low soft tissue contrast and low resolution), we incorporated adaptive weight masking technology to enhance CT feature extraction. Moreover, we introduced a multimodal attention mechanism, which ensures that the model focuses on salient information, thereby elevating the precision and robustness of brain tumor detection. Even without substantial labeled data, experimental results demonstrate that Double-SimCLR achieves 93.458% accuracy, 92.463% precision, and a 93.058% F1-score, outperforming state-of-the-art (SOTA) models by 2.871%, 2.643%, and 3.098%, respectively.
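The dual-branch contrastive core of a SimCLR-style setup can be sketched with a cross-modal NT-Xent loss, where each MRI embedding's positive is the CT embedding of the same case. This is only the generic contrastive objective, not Double-SimCLR's fusion, masking, or attention components; all names here are illustrative:

```python
import numpy as np

def ntxent_cross_modal(z_mri, z_ct, tau=0.5):
    """SimCLR-style NT-Xent loss over two modality branches. The
    positive pair for each MRI embedding is the CT embedding of the
    same case (and vice versa); every other embedding is a negative.
    z_mri, z_ct: (N, D) L2-normalised embeddings. Sketch only."""
    z = np.concatenate([z_mri, z_ct], axis=0)      # (2N, D)
    sim = z @ z.T / tau                             # cosine similarities / temperature
    n = len(z_mri)
    np.fill_diagonal(sim, -np.inf)                  # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

z_a = np.eye(2)                                  # two unit embeddings
loss_pos = ntxent_cross_modal(z_a, z_a)          # perfectly matched modalities
loss_neg = ntxent_cross_modal(z_a, z_a[::-1])    # mismatched pairing
```

Matched cross-modal pairs should give a lower loss than a shuffled pairing, which is the property the training signal relies on.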

JBHI Journal 2025 Journal Article

An Autonomous AI Framework for Knee Osteoarthritis Diagnosis via Semi-Supervised Learning and Dual Knowledge Distillation

  • Li Peng
  • Li Xu
  • Xiaoding Wang
  • Lizhao Wu
  • Jin Liu
  • Weiquan Zeng
  • Md. Jalil Piran

In the diagnosis of knee osteoarthritis, imaging analysis relies on accurate classification models to assess the severity of the disease. Traditional methods often require large amounts of labeled data, which is challenging in many developing countries, especially in resource-limited areas where the scarcity of labeled data becomes a bottleneck due to a lack of medical resources and qualified annotators. Privacy concerns also arise when using high-quality datasets from developed countries. This paper proposes a semi-supervised dual-knowledge distillation framework, PADistillation, that leverages autonomous AI to expand the reach of telemedicine and remote diagnostics while addressing data scarcity and privacy problems. To overcome the challenge of insufficient labeled data, the framework uses attention-guided distillation, employing high-attention pixels and channels to guide the student model's learning, thereby enhancing classification performance with limited labeled data. To ensure patient privacy during training, a personalized pixel shuffling method is proposed, dynamically determining the privacy protection priority of different regions by measuring the visual disorder of image areas. Through autonomous optimization and real-time decision making, PADistillation operates efficiently in resource-constrained environments and supports telemedicine and remote diagnostic needs. Even with limited labeled data, the experimental results show that PADistillation achieves an accuracy rate of 88.19%, a precision rate of 86.28%, and an F1 score of 86.94%. Compared with the mainstream semi-supervised methods, its accuracy rate is increased by more than 2%, the training efficiency is improved by 30%, and the privacy protection mechanism only leads to a performance loss of 1.2%.
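The privacy primitive underneath pixel shuffling is a per-region permutation of pixels. The paper's personalized variant additionally ranks regions by visual disorder; the sketch below shows only the basic block-wise shuffle, with the block size and seed as illustrative parameters:

```python
import numpy as np

def shuffle_blocks(img, block=4, seed=0):
    """Permute pixels independently inside each (block x block) tile,
    destroying local appearance while preserving tile-level statistics.
    A minimal stand-in for the paper's personalized pixel shuffling,
    which ranks regions by visual disorder instead of shuffling every
    tile uniformly. img: (H, W) with H, W divisible by block."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = out[i:i + block, j:j + block].ravel()  # copy of the tile
            rng.shuffle(tile)
            out[i:i + block, j:j + block] = tile.reshape(block, block)
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
scrambled = shuffle_blocks(img, block=4)
```

Because the shuffle only reorders values within each tile, the per-tile multiset of pixel values is unchanged.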

IJCAI Conference 2025 Conference Paper

Beyond Feature Mapping GAP: Integrating Real HDRTV Priors for Superior SDRTV-to-HDRTV Conversion

  • Gang He
  • Kepeng Xu
  • Li Xu
  • WenXin Yu
  • Xianyun Wu

The rise of HDR-WCG display devices has highlighted the need to convert SDRTV to HDRTV, as most video sources are still in SDR. Existing methods primarily focus on designing neural networks to learn a single-style mapping from SDRTV to HDRTV. However, the limited information in SDRTV and the diversity of styles in real-world conversions render this process an ill-posed problem, thereby constraining the performance and generalization of these methods. Inspired by generative approaches, we propose a novel method for SDRTV to HDRTV conversion guided by real HDRTV priors. Despite the limited information in SDRTV, introducing real HDRTV as reference priors significantly constrains the solution space of the originally high-dimensional ill-posed problem. This shift transforms the task from solving an unreferenced prediction problem to making a referenced selection, thereby markedly enhancing the accuracy and reliability of the conversion process. Specifically, our approach comprises two stages: the first stage employs a Vector Quantized Generative Adversarial Network to capture HDRTV priors, while the second stage matches these priors to the input SDRTV content to recover realistic HDRTV outputs. We evaluate our method on public datasets, demonstrating its effectiveness with significant improvements in both objective and subjective metrics across real and synthetic datasets.
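The "referenced selection" in the second stage rests on vector quantization: each feature is matched to its nearest entry in a learned codebook of HDRTV priors. A minimal sketch of that lookup step (shapes and names are illustrative, not the paper's architecture):

```python
import numpy as np

def vq_lookup(features, codebook):
    """Replace each feature vector with its nearest codebook entry,
    i.e. the matching step a VQ-GAN prior stage relies on.
    features: (N, D), codebook: (K, D). Illustrative sketch only."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    idx = d2.argmin(axis=1)          # index of nearest prior per feature
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
feats = np.array([[0.1, -0.1], [0.9, 1.2]])
quantized, codes = vq_lookup(feats, codebook)
```

Selecting from a finite codebook is what constrains the otherwise ill-posed SDR-to-HDR solution space.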

IJCAI Conference 2025 Conference Paper

FedCPD: Personalized Federated Learning with Prototype-Enhanced Representation and Memory Distillation

  • Kaili Jin
  • Li Xu
  • Xiaoding Wang
  • Sun-Yuan Hsieh
  • Jie Wu
  • Limei Lin

Federated learning, as a distributed learning framework, aims to develop a global model while preserving client privacy. However, heterogeneity of client data leads to fairness issues and reduced performance. Techniques like parameter decoupling and prototype learning appear promising, yet challenges such as forgetting historical data and limited generalization persist. These methods also lack local insights, with locally trained features prone to overfitting, which affects generalization in global parameter aggregation. To address these challenges, we propose FedCPD, a personalized federated learning framework. FedCPD maintains historical information, reduces information loss, and increases personalization through hierarchical feature distillation and cross-layer feature fusion. Moreover, we utilize representation techniques like prototype contrastive learning and prototype alignment to capture diverse client data features, thus improving model generalization and fairness. Experiments show FedCPD outperforms state-of-the-art models, enhancing generalization by up to 10.40% and personalization by up to 4.90%, highlighting its effectiveness and superiority.
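The prototype machinery such methods exchange instead of raw data is just a per-class mean embedding, and alignment penalizes its drift from the aggregated global prototype. A minimal sketch under those assumptions (helper names are hypothetical, not FedCPD's API):

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Mean embedding per class: the 'prototype' that prototype-based
    personalized FL methods share or align instead of raw client data.
    embeddings: (N, D); labels: (N,). Illustrative helper only."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def prototype_align_loss(local_protos, global_protos):
    """Squared distance between each local prototype and the global one
    for the same class: a simple stand-in for a prototype-alignment
    regularizer, not the paper's exact objective."""
    return float(sum(((local_protos[c] - global_protos[c]) ** 2).sum()
                     for c in local_protos))

emb = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])
labels = np.array([0, 0, 1])
protos = class_prototypes(emb, labels)
```

A client whose local prototypes already match the global ones contributes zero alignment loss.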

IJCAI Conference 2025 Conference Paper

FedHAN: A Cache-Based Semi-Asynchronous Federated Learning Framework Defending Against Poisoning Attacks in Heterogeneous Clients

  • Xiaoding Wang
  • Bin Ye
  • Li Xu
  • Lizhao Wu
  • Sun-Yuan Hsieh
  • Jie Wu
  • Limei Lin

Federated learning is vulnerable to model poisoning attacks in which malicious participants compromise the global model by altering the model updates. Current defense strategies are divided into three types: aggregation-based methods, validation dataset-based methods, and update distance-based methods. However, these techniques often neglect the challenges posed by device heterogeneity and asynchronous communication. Even upon identifying malicious clients, the global model may already be significantly damaged, requiring effective recovery strategies to reduce the attacker's impact. Current recovery methods, which are based on historical update records, are limited in environments with device heterogeneity and asynchronous communication. To address these problems, we introduce FedHAN, a reliable federated learning algorithm designed for asynchronous communication and device heterogeneity. FedHAN customizes sparse models, uses historical client updates to impute missing parameters in sparse updates, dynamically assigns adaptive weights, and combines update deviation detection with update prediction-based model recovery. Theoretical analysis indicates that FedHAN achieves favorable convergence despite unbounded staleness and effectively discriminates between benign and malicious clients. Experiments reveal that FedHAN, compared to leading methods, increases the accuracy of the model by 7.86%, improves the detection accuracy of poisoning attacks by 12%, and enhances the recovery accuracy by 7.26%. As evidenced by these results, FedHAN exhibits enhanced reliability and robustness in intricate and dynamic federated learning scenarios.

AAAI Conference 2025 Conference Paper

Manhattan Self-Attention Diffusion Residual Networks with Dynamic Bias Rectification for BCI-based Few-Shot Learning

  • Hao Wang
  • Li Xu
  • Yuntao Yu
  • Weiyue Ding
  • Yiming Xu

The distribution biases and scarcity of samples in multi-source data present significant challenges for few-shot learning (FSL) tasks based on brain-computer interface (BCI). Recent efforts have explored the application of diffusion mechanisms in FSL, typically utilizing labeled data to augment the support set. However, this approach has not effectively utilized unlabeled data nor addressed distribution biases. Inspired by the latest advancements in FSL, we propose the Manhattan self-attention diffusion residual networks (MSADiff-Resnet) with dynamic bias rectification. This model explicitly adds a Manhattan self-attention diffusion layer to ResNet, using attention mechanisms and a Manhattan distance-based decay function to control local diffusion intensity, and adjusts the global diffusion strength through a tunable parameter. This diffusion mechanism bridges labeled and unlabeled data, addressing the limitations associated with sample availability. Additionally, we effectively tackle the distribution biases of multi-source data through inter-class bias rectification and dynamic intra-class bias rectification. Moreover, this study presents for the first time a universal deep learning framework specifically designed for BCI-based FSL tasks. Extensive experiments on multi-source BCI task datasets have validated the effectiveness of the proposed method.
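The Manhattan distance-based decay can be sketched as a multiplicative mask on self-attention over a 2D grid: interaction strength falls off as gamma to the power of the Manhattan distance between positions. The decay rate and the attention shapes below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def manhattan_decay_mask(h, w, gamma=0.5):
    """Decay matrix D[p, q] = gamma ** (|dy| + |dx|) between all pairs
    of positions on an (h, w) grid. Multiplying attention weights by D
    damps long-range interactions, in the spirit of a Manhattan
    distance-based decay function (gamma is an illustrative choice)."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1)          # (h*w, 2)
    dist = np.abs(pos[:, None, :] - pos[None, :, :]).sum(-1)  # Manhattan distances
    return gamma ** dist

def decayed_attention(q, k, v, mask):
    """Softmax attention whose unnormalised weights are re-scaled by
    the decay mask before normalisation."""
    a = np.exp(q @ k.T / np.sqrt(q.shape[-1]))
    a = a * mask                       # damp distant positions
    a /= a.sum(axis=1, keepdims=True)  # renormalise rows
    return a @ v

mask = manhattan_decay_mask(2, 2, gamma=0.5)
out = decayed_attention(np.eye(4, 3), np.eye(4, 3), np.eye(4, 3), mask)
```

Setting gamma closer to 1 weakens the decay, which is one natural handle for the global diffusion strength.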

IJCAI Conference 2025 Conference Paper

Unleashing the Potential of Transformer Flow for Photorealistic Face Restoration

  • Kepeng Xu
  • Li Xu
  • Gang He
  • Wei Chen
  • Xianyun Wu
  • WenXin Yu

Face restoration is a challenging task due to the need to remove artifacts and restore details. Traditional methods usually use generative model priors to achieve face restoration, but the restored results are still insufficient in terms of realism and details. In this paper, we introduce OmniFace, a novel face restoration framework that leverages Transformer-based diffusion flow. By exploiting the scaling property of Transformers, OmniFace achieves high-resolution restoration with exceptional realism and detail. The framework integrates three key components: (1) a Transformer-driven vector estimation network, (2) a representation-aligned ControlNet, and (3) an adaptive training strategy for face restoration. The inherent scaling law of Transformer architectures enables the restoration of high-quality faces at high resolution. The ControlNet, combined with the pre-trained diffusion representation, can be trained easily. The adaptive training strategy provides a vector field that is more suitable for face restoration. Comprehensive experiments demonstrate that OmniFace outperforms existing techniques in terms of restoration quality across multiple benchmark datasets, especially in restoring photographic-level texture details in high-resolution scenes.

IJCAI Conference 2024 Conference Paper

Beyond Alignment: Blind Video Face Restoration via Parsing-Guided Temporal-Coherent Transformer

  • Kepeng Xu
  • Li Xu
  • Gang He
  • WenXin Yu
  • Yunsong Li

Multiple complex degradations are coupled in low-quality video faces in the real world. Therefore, blind video face restoration is a highly challenging ill-posed problem, requiring not only hallucinating high-fidelity details but also enhancing temporal coherence across diverse pose variations. Restoring each frame independently in a naive manner inevitably introduces temporal incoherence and artifacts from pose changes and keypoint localization errors. To address this, we propose the first blind video face restoration approach with a novel parsing-guided temporal-coherent transformer (PGTFormer) without pre-alignment. PGTFormer leverages semantic parsing guidance to select optimal face priors for generating temporally coherent artifact-free results. Specifically, we pre-train a temporal-spatial vector quantized auto-encoder on high-quality video face datasets to extract expressive context-rich priors. Then, the temporal parse-guided codebook predictor (TPCP) restores faces in different poses based on face parsing context cues without performing face pre-alignment. This strategy reduces artifacts and mitigates jitter caused by cumulative errors from face pre-alignment. Finally, the temporal fidelity regulator (TFR) enhances fidelity through temporal feature interaction and improves video temporal consistency. Extensive experiments on face videos show that our method outperforms previous face restoration baselines. The code will be released at https://github.com/kepengxu/PGTFormer.

IROS Conference 2024 Conference Paper

Intention-Aware Planner for Robust and Safe Aerial Tracking

  • Qiuyu Ren
  • Huan Yu 0002
  • Jiajun Dai
  • Zhi Zheng
  • Jun Meng
  • Li Xu
  • Chao Xu 0001
  • Fei Gao 0011

Autonomous target tracking with quadrotors has wide applications in many scenarios, such as cinematographic follow-up shooting or suspect chasing. Target motion prediction is necessary when designing the tracking planner. However, the widely used constant velocity or constant rotation assumption cannot fully capture the dynamics of the target. The tracker may fail when the target happens to move aggressively, such as a sudden turn or deceleration. In this paper, we propose an intention-aware planner by additionally considering the intention of the target to enhance safety and robustness in aerial tracking applications. Firstly, a designated intention prediction method is proposed, which combines a user-defined potential assessment function and a state observation function. A reachable region is generated to specifically evaluate the turning intentions. Then we design an intention-driven hybrid A* method to predict the future possible positions for the target. Finally, an intention-aware optimization approach is designed to generate a spatial-temporal optimal trajectory, allowing the tracker to perceive unexpected situations from the target. Benchmark comparisons and real-world experiments are conducted to validate the performance of our method.

NeurIPS Conference 2023 Conference Paper

Joint Attribute and Model Generalization Learning for Privacy-Preserving Action Recognition

  • Duo Peng
  • Li Xu
  • Qiuhong Ke
  • Ping Hu
  • Jun Liu

Privacy-Preserving Action Recognition (PPAR) aims to transform raw videos into anonymous ones to prevent privacy leakage while maintaining action clues, which is an increasingly important problem in intelligent vision applications. Despite recent efforts in this task, it is still challenging to deal with novel privacy attributes and novel privacy attack models that are unavailable during the training phase. In this paper, from the perspective of meta-learning (learning to learn), we propose a novel Meta Privacy-Preserving Action Recognition (MPPAR) framework to improve both generalization abilities above (i.e., generalize to novel privacy attributes and novel privacy attack models) in a unified manner. Concretely, we simulate train/test task shifts by constructing disjoint support/query sets w.r.t. privacy attributes or attack models. Then, a virtual training and testing scheme is applied based on support/query sets to provide feedback to optimize the model's learning toward better generalization. Extensive experiments demonstrate the effectiveness and generalization of the proposed framework compared to state-of-the-art methods.

NeurIPS Conference 2022 Conference Paper

Heatmap Distribution Matching for Human Pose Estimation

  • Haoxuan Qu
  • Li Xu
  • Yujun Cai
  • Lin Geng Foo
  • Jun Liu

For tackling the task of 2D human pose estimation, the great majority of recent methods regard this task as a heatmap estimation problem, and optimize the heatmap prediction using the Gaussian-smoothed heatmap as the optimization objective and the pixel-wise loss (e.g., MSE) as the loss function. In this paper, we show that when the heatmap prediction is optimized in this way, the performance of body joint localization, which is the intrinsic objective of this task, may not improve consistently during the optimization process. To address this problem, from a novel perspective, we propose to formulate the optimization of the heatmap prediction as a distribution matching problem between the predicted heatmap and the dot annotation of the body joint directly. By doing so, our proposed method does not need to construct the Gaussian-smoothed heatmap and can achieve a more consistent improvement in model performance during the optimization of the heatmap prediction. We show the effectiveness of our proposed method through extensive experiments on the COCO dataset and the MPII dataset.
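One simple way to treat a predicted heatmap as a distribution and compare it against the dot annotation is to penalize the distance between the heatmap's expected coordinate (soft-argmax) and the annotated joint. This illustrates the distribution-level viewpoint only; the paper's exact matching loss may differ:

```python
import numpy as np

def soft_argmax_loss(logits, joint_yx):
    """Interpret the predicted heatmap as a probability distribution
    (softmax over all pixels) and penalise the squared distance between
    its expected coordinate and the dot annotation joint_yx = (y, x).
    One distribution-level criterion, not the paper's exact loss."""
    h, w = logits.shape
    p = np.exp(logits - logits.max())
    p /= p.sum()                                   # heatmap as a distribution
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ey, ex = (p * ys).sum(), (p * xs).sum()        # expected coordinate
    return (ey - joint_yx[0]) ** 2 + (ex - joint_yx[1]) ** 2

logits = np.full((5, 5), -1e3)   # near-delta heatmap peaked at (2, 3)
logits[2, 3] = 0.0
```

Unlike pixel-wise MSE against a Gaussian target, this criterion is tied directly to localization error, the intrinsic objective the abstract points at.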

AAAI Conference 2022 Conference Paper

Transcoded Video Restoration by Temporal Spatial Auxiliary Network

  • Li Xu
  • Gang He
  • Jinjia Zhou
  • Jie Lei
  • Weiying Xie
  • Yunsong Li
  • Yu-Wing Tai

In most video platforms, such as YouTube, Kwai, and TikTok, the played videos usually have undergone multiple video encodings such as hardware encoding by recording devices, software encoding by video editing apps, and single/multiple video transcoding by video application servers. Previous works in compressed video restoration typically assume the compression artifacts are caused by one-time encoding. Thus, the derived solution usually does not work very well in practice. In this paper, we propose a new method, temporal spatial auxiliary network (TSAN), for transcoded video restoration. Our method considers the unique traits between video encoding and transcoding, and we consider the initial shallow encoded videos as the intermediate labels to assist the network to conduct self-supervised attention training. In addition, we employ adjacent multi-frame information and propose the temporal deformable alignment and pyramidal spatial fusion for transcoded video restoration. The experimental results demonstrate that the performance of the proposed method is superior to that of the previous techniques. The code is available at https://github.com/icecherylXuli/TSAN.

AAAI Conference 2021 Conference Paper

Temporal Segmentation of Fine-gained Semantic Action: A Motion-Centered Figure Skating Dataset

  • Shenglan Liu
  • Aibin Zhang
  • Yunheng Li
  • Jian Zhou
  • Li Xu
  • Zhuben Dong
  • Renhao Zhang

Temporal Action Segmentation (TAS) has achieved great success in many fields such as exercise rehabilitation, movie editing, etc. Currently, task-driven TAS is a central topic in human action analysis. However, motion-centered TAS, as an important topic, is little researched due to unavailable datasets. In order to explore more models and practical applications of motion-centered TAS, we introduce a Motion-Centered Figure Skating (MCFS) dataset in this paper. Compared with existing temporal action segmentation datasets, the MCFS dataset is fine-grained in semantics, specialized, and motion-centered. Besides, RGB-based and Skeleton-based features are provided in the MCFS dataset. Experimental results show that existing state-of-the-art methods are difficult to achieve excellent segmentation results (including accuracy, edit score, and F1 score) on the MCFS dataset. This indicates that MCFS is a challenging dataset for motion-centered TAS. The latest dataset can be downloaded at https://shenglanliu.github.io/mcfs-dataset/.

AAAI Conference 2015 Conference Paper

On Vectorization of Deep Convolutional Neural Networks for Vision Tasks

  • Jimmy Ren
  • Li Xu

We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g., GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extant literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.
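The classic example of abstracting a CNN building block as a matrix operator is the "im2col" trick: unroll every receptive field into a row, then express convolution as a single dense matrix multiply. A minimal single-channel sketch in numpy (the paper's implementation is in Matlab; shapes here are illustrative):

```python
import numpy as np

def conv2d_as_matmul(x, k):
    """Valid 2D correlation written as one matrix multiply: unroll each
    receptive field of x into a row ('im2col'), then multiply by the
    flattened kernel. This is the vectorised form that maps convolution
    onto the dense GEMM that vector hardware excels at."""
    xh, xw = x.shape
    kh, kw = k.shape
    oh, ow = xh - kh + 1, xw - kw + 1
    cols = np.array([x[i:i + kh, j:j + kw].ravel()
                     for i in range(oh) for j in range(ow)])  # (oh*ow, kh*kw)
    return (cols @ k.ravel()).reshape(oh, ow)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))
y = conv2d_as_matmul(x, k)
```

The loop here only builds the unrolled matrix; all arithmetic happens in the single `@`, which is the point of the vectorization.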

NeurIPS Conference 2015 Conference Paper

Shepard Convolutional Neural Networks

  • Jimmy Ren
  • Li Xu
  • Qiong Yan
  • Wenxiu Sun

Deep learning has recently been introduced to the field of low-level computer vision and image processing. Promising results have been obtained in a number of tasks including super-resolution, inpainting, deconvolution, filtering, etc. However, previously adopted neural network approaches such as convolutional neural networks and sparse auto-encoders inherently rely on translation invariant operators. We found this property prevents the deep learning approaches from outperforming the state-of-the-art if the task itself requires translation variant interpolation (TVI). In this paper, we draw on Shepard interpolation and design Shepard Convolutional Neural Networks (ShCNN) which efficiently realizes end-to-end trainable TVI operators in the network. We show that by adding only a few feature maps in the new Shepard layers, the network is able to achieve stronger results than a much deeper architecture. Superior performance on both image inpainting and super-resolution is obtained where our system outperforms previous ones while keeping the running time competitive.
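The classical interpolation ShCNN draws on is Shepard's inverse-distance weighting: each query value is a weighted mean of known samples with weights 1/distance^p. The sketch below shows only this underlying interpolator, not the learnable Shepard layers; the power p and epsilon are illustrative:

```python
import numpy as np

def shepard_interpolate(known_xy, known_v, query_xy, p=2, eps=1e-8):
    """Classic Shepard (inverse-distance-weighted) interpolation: each
    query value is a weighted mean of the known samples, with weights
    1 / distance**p. ShCNN wraps a learnable, end-to-end trainable
    version of this translation *variant* idea into network layers;
    this is only the underlying interpolator."""
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** p + eps)             # closer samples dominate
    w /= w.sum(axis=1, keepdims=True)    # normalise weights per query
    return w @ known_v

pts = np.array([[0.0, 0.0], [1.0, 0.0]])
vals = np.array([0.0, 1.0])
mid = shepard_interpolate(pts, vals, np.array([[0.5, 0.0]]))
near = shepard_interpolate(pts, vals, np.array([[0.0, 0.0]]))
```

The weights depend on where the query sits relative to the known samples, which is exactly why the operator is translation variant rather than a fixed convolution.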

TAAS Journal 2014 Journal Article

Adaptive Epidemic Dynamics in Networks

  • Shouhuai Xu
  • Wenlian Lu
  • Li Xu
  • Zhenxin Zhan

Theoretical modeling of computer virus/worm epidemic dynamics is an important problem that has attracted many studies. However, most existing models are adapted from biological epidemic ones. Although biological epidemic models can certainly be adapted to capture some computer virus spreading scenarios (especially when the so-called homogeneity assumption holds), the problem of computer virus spreading is not well understood because it has many important perspectives that are not necessarily accommodated in the biological epidemic models. In this article, we initiate the study of such a perspective, namely that of adaptive defense against epidemic spreading in arbitrary networks. More specifically, we investigate a nonhomogeneous Susceptible-Infectious-Susceptible (SIS) model where the model parameters may vary with respect to time. In particular, we focus on two scenarios we call semi-adaptive defense and fully adaptive defense, which accommodate implicit and explicit dependency relationships between the model parameters, respectively. In the semi-adaptive defense scenario, the model’s input parameters are given; the defense is semi-adaptive because the adjustment is implicitly dependent upon the outcome of virus spreading. For this scenario, we present a set of sufficient conditions (some are more general or succinct than others) under which the virus spreading will die out; such sufficient conditions are also known as epidemic thresholds in the literature. In the fully adaptive defense scenario, some input parameters are not known (i.e., the aforementioned sufficient conditions are not applicable) but the defender can observe the outcome of virus spreading. For this scenario, we present adaptive control strategies under which the virus spreading will die out or will be contained to a desired level.
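A discrete-time mean-field sketch of the networked SIS model makes the adaptive-defense idea concrete: the infection and cure rates are just parameters of the update step, so an adaptive defense corresponds to changing them between steps. This is a standard mean-field approximation for illustration, not the article's exact continuous-time formulation:

```python
import numpy as np

def sis_step(p, A, beta, delta):
    """One discrete-time mean-field step of a networked SIS model.
    p[i] is the probability node i is infected, A the adjacency matrix,
    beta the per-edge infection rate, delta the cure rate. Time-varying
    (adaptive) defense corresponds to changing beta/delta between
    steps. Sketch only, not the article's exact formulation."""
    # probability node i is NOT infected by any infectious neighbour
    not_infected = np.prod(1 - beta * A * p[None, :], axis=1)
    return (1 - delta) * p + (1 - p) * (1 - not_infected)

A = np.ones((5, 5)) - np.eye(5)          # complete graph on 5 nodes
p_weak = np.full(5, 0.5)                 # epidemic under a strong defense
p_strong = np.full(5, 0.5)               # epidemic under a weak defense
for _ in range(300):
    p_weak = sis_step(p_weak, A, beta=0.05, delta=0.9)
    p_strong = sis_step(p_strong, A, beta=0.9, delta=0.05)
```

With a high cure rate relative to the infection pressure the epidemic dies out, while the reversed parameters sustain a high endemic level, which is the qualitative dichotomy the article's thresholds characterize.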

NeurIPS Conference 2014 Conference Paper

Deep Convolutional Neural Network for Image Deconvolution

  • Li Xu
  • Jimmy Ren
  • Ce Liu
  • Jiaya Jia

Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.

TAAS Journal 2012 Journal Article

Push- and pull-based epidemic spreading in networks

  • Shouhuai Xu
  • Wenlian Lu
  • Li Xu

Understanding the dynamics of computer virus (malware, worm) in cyberspace is an important problem that has attracted a fair amount of attention. Early investigations for this purpose adapted biological epidemic models, and thus inherited the so-called homogeneity assumption that each node is equally connected to others. Later studies relaxed this often unrealistic homogeneity assumption, but still focused on certain power-law networks. Recently, researchers investigated epidemic models in arbitrary networks (i.e., no restrictions on network topology). However, all these models only capture push-based infection, namely that an infectious node always actively attempts to infect its neighboring nodes. Very recently, the concept of pull-based infection was introduced but was not treated rigorously. Along this line of research, the present article investigates push- and pull-based epidemic spreading dynamics in arbitrary networks, using a nonlinear dynamical systems approach. The article advances the state-of-the-art as follows: (1) It presents a more general and powerful sufficient condition (also known as epidemic threshold in the literature) under which the spreading will become stable. (2) It gives both upper and lower bounds on the global mean infection rate, regardless of the stability of the spreading. (3) It offers insights into, among other things, the estimation of the global mean infection rate through localized monitoring of a small constant number of nodes, without knowing the values of the parameters.
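In arbitrary-network epidemic models, sufficient die-out conditions typically involve the spectral radius of the adjacency matrix. The check below is the familiar textbook special case (infection rate times spectral radius below the cure rate); the article derives more general push/pull conditions, which this sketch does not reproduce:

```python
import numpy as np

def dies_out(A, beta, delta):
    """Check the familiar sufficient condition beta * rho(A) < delta,
    where rho(A) is the spectral radius of the adjacency matrix. The
    textbook special case of an epidemic threshold; the article's
    push/pull conditions are more general than this."""
    rho = max(abs(np.linalg.eigvals(A)))   # spectral radius
    return bool(beta * rho < delta)

A = np.ones((5, 5)) - np.eye(5)   # complete graph on 5 nodes, rho(A) = 4
```

For this graph, beta = 0.1 with delta = 0.5 satisfies the condition while beta = 0.2 violates it, illustrating how the threshold separates die-out from possible persistence.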

ICRA Conference 2000 Conference Paper

Real-Time Motion Planning for Personal Robots Using Primitive Motions

  • Li Xu
  • Yuan F. Zheng

Real-time motion planning is one of the most challenging problems for successful employment of personal robots. A motion planning strategy via human commands is proposed. The human commands produce primitive motions which can form complex trajectories in real-time. Proper modifications to the primitive motions can generate trajectories which avoid obstacles. An automatic modification mechanism also can assist the robot to reach a target point quickly. Experimental results are presented to verify the effectiveness of the proposed scheme.

ICRA Conference 1999 Conference Paper

Reflexive Behavior of Personal Robots Using Primitive Motions

  • Li Xu
  • Yuan F. Zheng

Personal robotics is a new and attractive use of robotic technologies. We study one of its important topics-real-time motion planning. We propose to use primitive motions and their combination to make this possible. A reflexive motion control scheme is proposed to activate appropriate primitive motions for a desired complex motion. Experimental results are presented to show the effectiveness of the proposed scheme.