Arrow Research search

Author name cluster

Qiang Nie

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers (7)

AAAI Conference 2026 Conference Paper

EmbryoDiff: A Conditional Diffusion Framework with Multi-Focal Feature Fusion for Fine-Grained Embryo Developmental Stage Recognition

  • Yong Sun
  • Zhengjie Zhang
  • Junyu Shi
  • Zhiyuan Zhang
  • Lijiang Liu
  • Qiang Nie

Identification of fine-grained embryo developmental stages during In Vitro Fertilization (IVF) is crucial for assessing embryo viability. Although recent deep learning methods have achieved promising accuracy, existing discriminative models fail to utilize the distributional prior of embryonic development to improve accuracy. Moreover, their reliance on single-focal information leads to incomplete embryonic representations, making them susceptible to feature ambiguity under cell occlusions. To address these limitations, we propose EmbryoDiff, a two-stage diffusion-based framework that formulates the task as a conditional sequence denoising process. Specifically, we first train and freeze a frame-level encoder to extract robust multi-focal features. In the second stage, we introduce a Multi-Focal Feature Fusion Strategy that aggregates information across focal planes to construct a 3D-aware morphological representation, effectively alleviating ambiguities arising from cell occlusions. Building on this fused representation, we derive complementary semantic and boundary cues and design a Hybrid Semantic-Boundary Condition Block to inject them into the diffusion-based denoising process, enabling accurate embryonic stage classification. Extensive experiments on two benchmark datasets show that our method achieves state-of-the-art results. Notably, with only a single denoising step, our model obtains the best average test performance, reaching 82.8% and 81.3% accuracy on the two datasets, respectively.
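The multi-focal fusion the abstract describes could be sketched as attention-weighted pooling across focal planes. This is a minimal illustration under assumed details (the feature dimensionality, the dot-product scoring function, and the `fuse_focal_features` name are not from the paper):

```python
import math

def fuse_focal_features(focal_feats, query):
    """Attention-weighted fusion of per-focal-plane feature vectors.

    focal_feats: list of F feature vectors (one per focal plane), each of dim D
    query: a D-dim vector used to score each plane's relevance
    Returns a single D-dim fused representation.
    """
    d = len(query)
    # Scaled dot-product relevance score for each focal plane.
    scores = [sum(f[i] * query[i] for i in range(d)) / math.sqrt(d)
              for f in focal_feats]
    # Softmax over planes (numerically stabilized).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum across planes gives the fused feature.
    return [sum(w * f[i] for w, f in zip(weights, focal_feats))
            for i in range(d)]

# Toy example: three focal planes, 4-dim features.
planes = [[1.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0],
          [0.5, 0.5, 0.0, 0.0]]
fused = fuse_focal_features(planes, query=[1.0, 0.0, 0.0, 0.0])
```

Planes whose features align with the query dominate the fused vector, which is one plausible way occluded or out-of-focus planes could be down-weighted.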

IROS Conference 2025 Conference Paper

RMG: Real-Time Expressive Motion Generation with Self-collision Avoidance for 6-DOF Companion Robotic Arms

  • Jiansheng Li
  • Haotian Song
  • Haoang Li
  • Jinni Zhou
  • Qiang Nie
  • Yi Cai

The six-degree-of-freedom (6-DOF) robotic arm has gained widespread application in human-coexisting environments. While previous research has predominantly focused on functional motion generation, the critical aspect of expressive motion in human-robot interaction remains largely unexplored. This paper presents a novel real-time motion generation planner that enhances interactivity by creating expressive robotic motions between arbitrary start and end states within predefined time constraints. Our approach involves three key contributions: first, we develop a mapping algorithm to construct an expressive motion dataset derived from human dance movements; second, we train motion generation models in both Cartesian and joint spaces using this dataset; third, we introduce an optimization algorithm that guarantees smooth, collision-free motion while maintaining the intended expressive style. Experimental results demonstrate the effectiveness of our method, which can generate expressive and generalized motions in under 0.5 seconds while satisfying all specified constraints.

AAAI Conference 2024 Conference Paper

Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning

  • Yanqi Ge
  • Qiang Nie
  • Ye Huang
  • Yong Liu
  • Chengjie Wang
  • Feng Zheng
  • Wen Li
  • Lixin Duan

One of the ultimate goals of representation learning is to achieve compactness within a class and good separability between classes. Many outstanding metric-based and prototype-based methods following the Expectation-Maximization paradigm have been proposed for this objective. However, they inevitably introduce biases into the learning process, particularly with long-tail distributed training data. In this paper, we reveal that the class prototype need not be derived from training features, and we propose a novel perspective: using pre-defined class anchors as feature centroids to unidirectionally guide feature learning. However, the pre-defined anchors may have a large semantic distance from the pixel features, which prevents them from being applied directly. To address this issue and generate feature centroids independent of feature learning, a simple yet effective Semantic Anchor Regularization (SAR) is proposed. SAR ensures the inter-class separability of semantic anchors in the semantic space by employing a classifier-aware auxiliary cross-entropy loss during training via disentanglement learning. By pulling the learned features toward these semantic anchors, several advantages are attained: 1) intra-class compactness and natural inter-class separability; 2) induced bias or errors from feature learning are avoided; and 3) robustness to the long-tail problem. The proposed SAR can be used in a plug-and-play manner with existing models. Extensive experiments demonstrate that SAR performs better than previous sophisticated prototype-based methods. The implementation is available at https://github.com/geyanqi/SAR.
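The core idea of pulling features toward fixed anchors can be sketched as follows. This is an assumed toy formulation, not the paper's SAR loss: the anchors here are scaled one-hot vectors, and the pull is a plain squared-distance gradient step (the auxiliary cross-entropy and disentanglement parts are omitted):

```python
def anchor_pull_step(feature, anchor, lr=0.1):
    """One gradient step pulling a feature toward its fixed class anchor.

    The loss is the squared L2 distance ||f - a||^2; its gradient w.r.t.
    f is 2 (f - a). The anchor is pre-defined and never updated, so the
    guidance is unidirectional, as described in the abstract.
    """
    return [f - lr * 2.0 * (f - a) for f, a in zip(feature, anchor)]

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Pre-defined, well-separated anchors (e.g. scaled one-hot vectors),
# chosen independently of any training features.
anchors = {0: [3.0, 0.0, 0.0], 1: [0.0, 3.0, 0.0], 2: [0.0, 0.0, 3.0]}

feat = [1.0, 1.0, 0.5]               # a feature assigned to class 0
before = sq_dist(feat, anchors[0])
for _ in range(20):
    feat = anchor_pull_step(feat, anchors[0])
after = sq_dist(feat, anchors[0])
```

Because the anchors are fixed and mutually orthogonal, intra-class compactness cannot drag the centroids toward head classes, which is one intuition for the claimed long-tail robustness.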

AAAI Conference 2024 Conference Paper

Unsupervised Continual Anomaly Detection with Contrastively-Learned Prompt

  • Jiaqi Liu
  • Kai Wu
  • Qiang Nie
  • Ying Chen
  • Bin-Bin Gao
  • Yong Liu
  • Jinbao Wang
  • Chengjie Wang

Unsupervised Anomaly Detection (UAD) with incremental training is crucial in industrial manufacturing, as unpredictable defects make obtaining sufficient labeled data infeasible. However, existing continual learning methods rely primarily on supervised annotations, which limits their application to UAD, where supervision is absent. Current UAD methods train separate models for different classes sequentially, leading to catastrophic forgetting and a heavy computational burden. To address this issue, we introduce a novel Unsupervised Continual Anomaly Detection framework, UCAD, which equips UAD with a continual learning capability through contrastively learned prompts. In the proposed UCAD, we design a Continual Prompting Module (CPM) that uses a concise key-prompt-knowledge memory bank to guide task-invariant 'anomaly' model predictions with task-specific 'normal' knowledge. Moreover, Structure-based Contrastive Learning (SCL) is designed with the Segment Anything Model (SAM) to improve prompt learning and anomaly segmentation results. Specifically, by treating SAM's masks as structure, we draw features within the same mask closer and push others apart to obtain general feature representations. We conduct comprehensive experiments and set the benchmark for unsupervised continual anomaly detection and segmentation, demonstrating that our method is significantly better than existing anomaly detection methods, even those with rehearsal training. The code will be available at https://github.com/shirowalker/UCAD.
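The key-prompt memory bank described above can be sketched as a nearest-key lookup: each task stores a (key, prompt) pair, and at test time, when the task identity is unknown, the prompt whose key best matches the query embedding is selected. The class name, embedding sizes, and string prompt IDs below are illustrative assumptions, not UCAD's implementation:

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u))
           * math.sqrt(sum(b * b for b in v)))
    return num / den

class KeyPromptBank:
    """Minimal key-prompt memory bank: one (key, prompt) pair per task."""

    def __init__(self):
        self.entries = []            # list of (key, prompt_id) pairs

    def add_task(self, key, prompt_id):
        # Called once per task after its 'normal' knowledge is learned.
        self.entries.append((key, prompt_id))

    def select(self, query):
        # Task identity is unknown at test time: pick the prompt whose
        # stored key is most similar to the query embedding.
        return max(self.entries, key=lambda e: cosine(e[0], query))[1]

bank = KeyPromptBank()
bank.add_task([1.0, 0.0, 0.0], "prompt_task_A")
bank.add_task([0.0, 1.0, 0.0], "prompt_task_B")

chosen = bank.select([0.9, 0.2, 0.1])
```

Because each task's knowledge lives in its own entry and old entries are never overwritten, this retrieval scheme is one way a single shared model can avoid catastrophic forgetting.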

ICRA Conference 2023 Conference Paper

NeRF-Loc: Visual Localization with Conditional Neural Radiance Field

  • Jianlin Liu
  • Qiang Nie
  • Yong Liu 0032
  • Chengjie Wang 0001

We propose a novel visual re-localization method based on direct matching between implicit 3D descriptors and the 2D image with a transformer. A conditional neural radiance field (NeRF) is chosen as the 3D scene representation in our pipeline, which supports continuous 3D descriptor generation and neural rendering. By unifying feature matching and scene coordinate regression in the same framework, our model learns generalizable knowledge and scene-specific priors during the two respective training stages. Furthermore, to improve localization robustness when a domain gap exists between the training and testing phases, we propose an appearance adaptation layer that explicitly aligns styles between the 3D model and the query image. Experiments show that our method achieves higher localization accuracy than other learning-based approaches on multiple benchmarks. Code is available at https://github.com/JenningsL/nerf-loc.
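The direct 3D-to-2D matching could be sketched as a soft similarity between each 3D descriptor and all 2D feature locations, a stand-in for the transformer cross-attention the abstract mentions. The function name, feature sizes, and scoring function are assumptions for illustration only:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def match_3d_to_2d(desc_3d, feats_2d):
    """Soft matching of implicit 3D descriptors against 2D image features.

    For each 3D descriptor, returns a distribution over 2D locations;
    the peak indicates the most likely 2D-3D correspondence, which a
    pose solver could then consume.
    """
    d = len(feats_2d[0])
    out = []
    for q in desc_3d:
        scores = [sum(q[i] * k[i] for i in range(d)) / math.sqrt(d)
                  for k in feats_2d]
        out.append(softmax(scores))
    return out

# Toy: two 3D descriptors, three 2D feature locations.
points = [[1.0, 0.0], [0.0, 1.0]]
pixels = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
attn = match_3d_to_2d(points, pixels)
```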

NeurIPS Conference 2022 Conference Paper

SoftPatch: Unsupervised Anomaly Detection with Noisy Data

  • Xi Jiang
  • Jianlin Liu
  • Jinbao Wang
  • Qiang Nie
  • Kai Wu
  • Yong Liu
  • Chengjie Wang
  • Feng Zheng

Although mainstream unsupervised anomaly detection (AD) algorithms perform well on academic datasets, their performance is limited in practical applications due to the ideal experimental setting of clean training data. Training with noisy data is an inevitable problem in real-world anomaly detection but is seldom discussed. This paper considers label-level noise in image sensory anomaly detection for the first time. To solve this problem, we propose a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level. Noise discriminators are used to generate outlier scores for patch-level noise elimination before coreset construction. The scores are then stored in the memory bank to soften the anomaly detection boundary. Compared with existing methods, SoftPatch maintains a strong modeling ability for normal data and alleviates the overconfidence problem in the coreset. Comprehensive experiments in various noise scenarios demonstrate that SoftPatch outperforms state-of-the-art AD methods on the MVTecAD and BTAD benchmarks and is comparable to those methods in the noise-free setting.
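The patch-level denoising step could be sketched as follows: score each patch by its mean distance to its k nearest neighbors, then drop the highest-scoring fraction before the coreset is built. This is a generic kNN outlier heuristic under assumed parameters, not SoftPatch's actual noise discriminators:

```python
import math

def knn_outlier_scores(patches, k=2):
    """Mean distance to the k nearest neighbors as a patch outlier score."""
    scores = []
    for i, p in enumerate(patches):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(patches) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

def denoise_patches(patches, drop_ratio=0.2):
    """Discard the highest-scoring fraction of patches before building
    the coreset; the remaining scores could later be kept in the memory
    bank to soften the detection boundary."""
    scores = knn_outlier_scores(patches)
    n_drop = max(1, int(len(patches) * drop_ratio))
    ranked = sorted(range(len(patches)), key=lambda i: scores[i])
    keep = ranked[: len(patches) - n_drop]
    return [patches[i] for i in sorted(keep)], scores

# Five normal-looking patch features and one obvious outlier.
data = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [0.05, 0.05],
        [5.0, 5.0]]
kept, scores = denoise_patches(data)
```

The outlier patch receives by far the largest kNN distance and is removed, so a contaminated "normal" training set no longer pollutes the memory bank.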

IROS Conference 2021 Conference Paper

Development of a Vision-Based Robotic Manipulation System for Transferring of Oocytes

  • Shu Miao
  • Dayuan Chen
  • Qiang Nie
  • Xin Jiang 0001
  • Xulin Sun
  • Jianjun Dai
  • Yun-Hui Liu 0001
  • Xiang Li 0009

Embryo/oocyte vitrification is an essential cryopreservation technique in IVF (in vitro fertilization) clinics. Reliable and effective transfer of embryos/oocytes is crucial to the subsequent steps of the vitrification procedure. After each transfer, the straw needs to be replaced with a new one. Due to uncertainties in fabrication and installation, the kinematic model of the straw is usually not known exactly, and the relationship between the microscope and the straw is likewise unknown without prior calibration. In such a situation, automatically transferring the oocytes from the micropipette to the narrow tip of the straw (0.7 mm) is very challenging. In this paper, a new vision-guided robotic system is developed to automate the transfer of oocytes without calibration. To this end, the unknown depth information is estimated and then compensated by a deep vision network constructed from microscope images, and an approximate Jacobian control algorithm is proposed to servo the end tip of the uncalibrated straw into contact with the micropipette using vision feedback. The oocyte is then automatically transferred from the micropipette to the straw to complete the task. The stability of the closed-loop control system is rigorously proved with Lyapunov methods, and the effectiveness of the developed robot is validated in experiments.
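The approximate-Jacobian idea can be sketched in a 2-DOF toy setting: the controller never knows the true actuator-to-image mapping, starts from an identity Jacobian guess, and refines it online with Broyden's rank-1 secant update while servoing toward the target. The linear toy mapping, the gain, and the function names are assumptions; the paper's actual controller and stability proof are more involved:

```python
def broyden_servo(apply_motion, target, x, steps=100, gain=0.5):
    """Uncalibrated 2-DOF visual servoing with a Broyden-updated image
    Jacobian estimate. `apply_motion` maps actuator coordinates to
    observed image coordinates and is unknown to the controller."""
    J = [[1.0, 0.0], [0.0, 1.0]]         # initial Jacobian guess
    y = apply_motion(x)
    for _ in range(steps):
        e = [y[0] - target[0], y[1] - target[1]]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        if abs(det) < 1e-9:
            break                         # estimate became singular
        # Control step: dx = -gain * J^{-1} e (2x2 inverse written out).
        dx = [-gain * ( J[1][1] * e[0] - J[0][1] * e[1]) / det,
              -gain * (-J[1][0] * e[0] + J[0][0] * e[1]) / det]
        x = [x[0] + dx[0], x[1] + dx[1]]
        y_new = apply_motion(x)
        dy = [y_new[0] - y[0], y_new[1] - y[1]]
        # Broyden rank-1 update enforcing the secant condition J dx = dy.
        denom = dx[0] * dx[0] + dx[1] * dx[1]
        if denom > 1e-12:
            r = [dy[0] - (J[0][0] * dx[0] + J[0][1] * dx[1]),
                 dy[1] - (J[1][0] * dx[0] + J[1][1] * dx[1])]
            for i in range(2):
                for j in range(2):
                    J[i][j] += r[i] * dx[j] / denom
        y = y_new
    return x, y

# Unknown (to the controller) actuator-to-image mapping.
def true_map(x):
    return [2.0 * x[0], 1.0 * x[1]]

x_final, y_final = broyden_servo(true_map, target=[1.0, 1.0], x=[0.0, 0.0])
err = abs(y_final[0] - 1.0) + abs(y_final[1] - 1.0)
```

Despite the wrong initial Jacobian, the feedback loop drives the observed position to the target, which is the essence of servoing an uncalibrated tool under a microscope.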