Arrow Research search

Author name cluster

Peng Qin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers

3

EAAI Journal 2026 Journal Article

An enhanced segmentation network built upon the you only look once framework for precise weed recognition in early-stage cotton

  • Peng Qin
  • Jiajia Wang
  • Zhenhong Jia
  • Gang Zhou
  • Wei Chen

Effective weed management in cotton fields is essential for precision agriculture, where accurate segmentation technologies enable site-specific herbicide application. To facilitate early recognition and timely control of weeds, a self-constructed dataset was established from early-stage cotton fields containing 11 weed species. Considering the challenges posed by occlusion, boundary ambiguity, and the high cost of pixel-level annotations, an enhanced instance segmentation network (ESNet) was developed on the basis of the You-Only-Look-Once version 11 segmentation (YOLO11-seg) framework to improve segmentation performance, and an active learning strategy was further introduced to reduce annotation workload. Specifically, the network integrates the Dynamic-Ghost Enhanced C3k2 Module (C3k2_DG) for lightweight and diverse feature extraction, the Plant-Shaped Enhanced Convolution (PSEC) for downsampling with orientation- and scale-aware modeling, the Dual-Branch Progressive Attention Fusion (DBPAF) for progressive multi-level feature integration, and the Local Importance Attention (LIA) for boundary refinement. Experimental results showed a mean Intersection over Union (mIoU) of 0.818, a Mask Precision of 0.943, and a mean Average Precision (mAP50) of 0.917. Within the active learning framework, a strategy combining Bayesian Active Learning by Disagreement (BALD) uncertainty estimation, Core-Set–based diversity sampling, and class-balanced weighting was adopted to identify representative samples more efficiently. Using only half of the training data, this approach retained 97.9% of the mIoU and 98.1% of the mAP50 achieved with the full dataset. To further demonstrate its practical applicability, the network was also tested through deployment on mobile devices, validating its feasibility for agricultural perception terminals.
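The sample-selection strategy described in the abstract (BALD uncertainty combined with Core-Set diversity) can be sketched in a few lines. This is an illustrative approximation, not the paper's implementation: the blending weight `alpha`, the use of MC-dropout passes for BALD, and the greedy k-center update are all assumptions; the class-balanced weighting term is omitted.

```python
import numpy as np

def bald_scores(probs, eps=1e-12):
    """BALD mutual information from T stochastic forward passes.
    probs: array of shape (T, N, C) with per-pass softmax outputs."""
    mean_p = probs.mean(axis=0)
    h_mean = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)           # entropy of the mean
    h_each = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=0)  # mean per-pass entropy
    return h_mean - h_each  # high = model disagrees with itself

def select_batch(probs, features, k, alpha=0.5):
    """Greedy selection blending BALD uncertainty with a Core-Set-style
    k-center diversity term (distance to the nearest selected sample).
    alpha is a hypothetical blending weight."""
    bald = bald_scores(probs)
    n = len(features)
    min_dist = np.full(n, np.inf)
    chosen = []
    for _ in range(k):
        if not chosen:
            score = bald.copy()  # first pick: pure uncertainty
        else:
            score = alpha * bald + (1 - alpha) * min_dist
        score[chosen] = -np.inf  # never pick the same sample twice
        idx = int(np.argmax(score))
        chosen.append(idx)
        d = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, d)  # k-center update
    return chosen
```

In practice the two score scales differ, so the uncertainty and distance terms would typically be normalized before blending.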

EAAI Journal 2026 Journal Article

Click-based interactive image segmentation with global hints and local corrections

  • Shengke Wang
  • Bajin Cai
  • Peng Qin
  • Xiandong Wang
  • Fengqin Yao
  • Meng Huang

The aim of click-based interactive image segmentation is to obtain pixel-level segmentation masks with only a small number of manual clicks. This approach streamlines the process of pixel-level annotation and image editing. Much research has focused on this area. In particular, SimpleClick, which utilizes Vision Transformers, has implemented a straightforward design that has demonstrated its effectiveness in interactive image segmentation. However, two issues remain: First, this simple design does not fully utilize the global guidance provided by the interaction map. Second, treating all clicks equally fails to maximize the benefits of the new clicks’ guidance in each iteration. To address these challenges, we propose a novel interactive image segmentation network called Glclick. Specifically, we introduce a Global Hint Module (GHM) that integrates global information from clicks into the transformer backbone. In addition, Glclick incorporates a Local Correction Module (LCM) that performs local optimization on the target masks generated by the backbone network. Extensive experiments on four generic datasets and three medical datasets demonstrate the superiority and generalizability of Glclick.
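The "interaction map" the abstract refers to is commonly built by rasterizing user clicks into binary disk maps that are fed to the network alongside the image. The sketch below shows that common encoding under stated assumptions (the `radius` parameter and the (y, x) click convention are hypothetical), not Glclick's actual GHM/LCM modules.

```python
import numpy as np

def encode_clicks(h, w, pos_clicks, neg_clicks, radius=5):
    """Rasterize user clicks into two binary disk maps: channel 0 for
    positive (foreground) clicks, channel 1 for negative (background)
    clicks. Each click is a (y, x) pixel coordinate."""
    yy, xx = np.mgrid[0:h, 0:w]
    maps = np.zeros((2, h, w), dtype=np.float32)
    for ch, clicks in enumerate((pos_clicks, neg_clicks)):
        for (cy, cx) in clicks:
            disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
            maps[ch][disk] = 1.0
    # The resulting maps are concatenated with the RGB image along the
    # channel axis before being fed to the segmentation backbone.
    return maps
```

At each interaction round the newly added click is appended to the appropriate list and the maps are re-rendered; the abstract's second criticism is precisely that encodings like this treat the newest click the same as all earlier ones.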

AAAI Conference 2026 Conference Paper

DynaQuant: Dynamic Mixed-Precision Quantization for Learned Image Compression

  • Youneng Bao
  • Yulong Cheng
  • Yiping Liu
  • Yichen Yang
  • Peng Qin
  • Mu Li
  • Yongsheng Liang

Prevailing quantization techniques in Learned Image Compression (LIC) typically employ a static, uniform bit-width across all layers, failing to adapt to the highly diverse data distributions and sensitivity characteristics inherent in LIC models. This leads to a suboptimal trade-off between performance and efficiency. In this paper, we introduce DynaQuant, a novel framework for dynamic mixed-precision quantization that operates on two complementary levels. First, we propose content-aware quantization, where learnable scaling and offset parameters dynamically adapt to the statistical variations of latent features. This fine-grained adaptation is trained end-to-end using a novel Distance-aware Gradient Modulator (DGM), which provides a more informative learning signal than the standard Straight-Through Estimator. Second, we introduce a data-driven, dynamic bit-width selector that learns to assign an optimal bit precision to each layer, dynamically reconfiguring the network's precision profile based on the input data. Our fully dynamic approach offers substantial flexibility in balancing rate-distortion (R-D) performance and computational cost. Experiments demonstrate that DynaQuant achieves R-D performance comparable to full-precision models while significantly reducing computational and storage requirements, thereby enabling the practical deployment of advanced LIC on diverse hardware platforms.
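The "learnable scaling and offset parameters" in the abstract refer to the standard fake-quantization step used in quantization-aware training. The sketch below shows that generic forward step only; it is not DynaQuant's method — in particular, the paper's Distance-aware Gradient Modulator replaces the Straight-Through Estimator in the backward pass, and the dynamic per-layer bit-width selector is not modeled here.

```python
import numpy as np

def fake_quantize(x, scale, offset, bits):
    """Generic quantize-dequantize ("fake quantization") forward pass
    with a learnable scale and offset (zero point).
    In QAT, round() has zero gradient almost everywhere, so training
    normally uses the Straight-Through Estimator; DynaQuant's DGM is a
    replacement for that backward rule, not shown here."""
    qmax = 2 ** bits - 1
    q = np.clip(np.round(x / scale + offset), 0, qmax)  # integer grid
    return (q - offset) * scale                          # back to real values
```

A mixed-precision scheme would call this with a different `bits` value per layer, chosen by the selector network based on the input, rather than a single static bit-width for the whole model.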