Arrow Research search

Author name cluster

Meili Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers

4

AAAI Conference 2026 · Conference Paper

Monocular Mesh Recovery and Body Measurement of Female Saanen Goats

  • Bo Jin
  • Shichao Zhao
  • Jin Lyu
  • Bin Zhang
  • Tao Yu
  • Liang An
  • Yebin Liu
  • Meili Wang

The lactation performance of Saanen dairy goats, renowned for their high milk yield, is intrinsically linked to their body size, making accurate 3D body measurement essential for assessing milk production potential, yet existing reconstruction methods lack goat-specific authentic 3D data. To address this limitation, we establish the FemaleSaanenGoat dataset containing synchronized eight-view RGBD videos of 55 female Saanen goats (6-18 months). Using multi-view DynamicFusion, we fuse noisy, non-rigid point cloud sequences into high-fidelity 3D scans, overcoming challenges from irregular surfaces and rapid movement. Based on these scans, we develop SaanenGoat, a parametric 3D shape model specifically designed for female Saanen goats. This model features a refined template with 41 skeletal joints and enhanced udder representation, registered with our scan data. A comprehensive shape space constructed from 48 goats enables precise representation of diverse individual variations. With the SaanenGoat model, we obtain high-precision 3D reconstruction from single-view RGBD input and achieve automated measurement of six critical body dimensions: body length, height, chest width, chest girth, hip width, and hip height. Experimental results demonstrate the superior accuracy of our method in both 3D reconstruction and body measurement, presenting a novel paradigm for large-scale 3D vision applications in precision livestock farming.
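The automated measurement step described above can be illustrated with a minimal sketch: given mesh vertices in a known coordinate frame, two of the six dimensions reduce to axis-aligned extents. The function names, the axis conventions (x along the body, z vertical), and the toy vertices are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: deriving two body dimensions from reconstructed
# mesh vertices, assuming (x, y, z) coordinates in metres with x along
# the body axis and z vertical. Illustrative only, not the paper's code.

def body_length(vertices):
    """Approximate body length as the extent along the body (x) axis."""
    xs = [v[0] for v in vertices]
    return max(xs) - min(xs)

def body_height(vertices):
    """Approximate height as the extent along the vertical (z) axis."""
    zs = [v[2] for v in vertices]
    return max(zs) - min(zs)

# Toy "mesh": four corner vertices of a goat-sized bounding box.
mesh = [(0.0, 0.0, 0.0), (0.9, 0.2, 0.0), (0.1, 0.1, 0.6), (0.85, 0.15, 0.55)]
print(body_length(mesh))  # 0.9
print(body_height(mesh))  # 0.6
```

Girth measurements (chest girth, etc.) would require slicing the mesh and summing a closed contour, which this sketch omits.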

IROS Conference 2025 · Conference Paper

SheepDA-YOLO: Cross-Domain Adaptive Mean Teacher with Dual-Path Decoupling for Sheep Behavior Recognition

  • Xinjie Chen
  • Haotian Zhang
  • Yongyuan Qiao
  • Meili Wang

With the rapid advancement of smart farming towards large-scale livestock operations, the demand for model generalization in cross-pen behavior recognition has significantly increased. Traditional deep learning models suffer from substantial performance degradation due to variations in illumination and structure across different sheep pens, often necessitating the re-annotation of tens of thousands of frames for each new environment to mitigate domain shift issues. This severely limits the deployment of models in large-scale sheep farms. To achieve the goal of "annotate once, generalize across pens," we propose the SheepDA-YOLO framework, which innovatively integrates contrastive image translation and feature decoupling to address cross-domain adaptation challenges in agriculture. The core of our method consists of four parts: generating bidirectional pseudo-images for source and target domains based on the CUT method to reduce image-level domain discrepancies through mixed training sets; employing a Mean Teacher architecture combined with a quadruple loss function to ensure stable knowledge transfer; proposing the DP-DMAF module, which suppresses illumination interference and feature confusion through dual-path feature decoupling and separable large-kernel attention, complemented by a high-resolution detection layer to enhance small-target recognition accuracy. Experimental results demonstrate that SheepDA-YOLO achieves 89.7% mAP in cross-domain testing on target sheep pens, outperforming state-of-the-art methods by 3.4% and significantly reducing annotation costs. The study is the first to validate the feasibility of cross-pen adaptation, providing an efficient solution for the scalable implementation of smart livestock farming.
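The Mean Teacher architecture mentioned in the abstract rests on one simple update rule: the teacher's weights are an exponential moving average (EMA) of the student's. A minimal sketch, assuming plain parameter dictionaries and an illustrative decay value (neither taken from the paper):

```python
# Sketch of the Mean Teacher EMA weight update: the teacher tracks a
# slow-moving average of the student, which stabilizes the pseudo-labels
# it produces for the unlabeled target domain.

def ema_update(teacher, student, decay=0.99):
    """Return teacher params moved toward student params by EMA."""
    return {k: decay * teacher[k] + (1.0 - decay) * student[k] for k in teacher}

teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, decay=0.9)
print(teacher["w"])  # 0.9
```

In practice this update is applied after every student optimization step, over all network tensors rather than a single scalar.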

AAAI Conference 2024 · Conference Paper

DHGCN: Dynamic Hop Graph Convolution Network for Self-Supervised Point Cloud Learning

  • Jincen Jiang
  • Lizhi Zhao
  • Xuequan Lu
  • Wei Hu
  • Imran Razzak
  • Meili Wang

Recent works attempt to extend Graph Convolution Networks (GCNs) to point clouds for classification and segmentation tasks. These works tend to sample and group points to create smaller point sets locally and mainly focus on extracting local features through GCNs, while ignoring the relationship between point sets. In this paper, we propose the Dynamic Hop Graph Convolution Network (DHGCN) for explicitly learning the contextual relationships between the voxelized point parts, which are treated as graph nodes. Motivated by the intuition that the contextual information between point parts lies in the pairwise adjacent relationship, which can be depicted quantitatively by the hop distance of the graph, we devise a novel self-supervised part-level hop distance reconstruction task and design a novel loss function accordingly to facilitate training. In addition, we propose the Hop Graph Attention (HGA), which takes the learned hop distance as input for producing attention weights to allow edge features to contribute distinctively in aggregation. Eventually, the proposed DHGCN is a plug-and-play module that is compatible with point-based backbone networks. Comprehensive experiments on different backbones and tasks demonstrate that our self-supervised method achieves state-of-the-art performance. Our source code is available at: https://github.com/Jinec98/DHGCN.
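The hop distance that serves as DHGCN's self-supervised target is just the shortest-path length between two part nodes in the part adjacency graph. A minimal BFS sketch, with a toy four-part chain graph that is purely illustrative:

```python
from collections import deque

# Sketch of the part-level hop distance used as a self-supervised
# reconstruction target: shortest-path hop count between two part nodes.

def hop_distance(adj, src, dst):
    """BFS shortest-path hop count between src and dst (-1 if unreachable)."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in adj[node]:
            if nbr == dst:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return -1

# Four voxelized point parts in a chain: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(hop_distance(adj, 0, 3))  # 3
```

During pretraining the network predicts this quantity for all part pairs, forcing it to encode how parts relate across the whole object rather than only locally.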

NeurIPS Conference 2024 · Conference Paper

PCoTTA: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding

  • Jincen Jiang
  • Qianyu Zhou
  • Yuhang Li
  • Xinkui Zhao
  • Meili Wang
  • Lizhuang Ma
  • Jian Chang
  • Jian J. Zhang

In this paper, we present PCoTTA, a pioneering framework for Continual Test-Time Adaptation (CoTTA) in multi-task point cloud understanding, enhancing the model's transferability towards the continually changing target domain. We introduce a multi-task setting for PCoTTA, which is practical and realistic, handling multiple tasks within one unified model during the continual adaptation. Our PCoTTA involves three key components: automatic prototype mixture (APM), Gaussian Splatted feature shifting (GSFS), and contrastive prototype repulsion (CPR). Firstly, APM is designed to automatically mix the source prototypes with the learnable prototypes using a similarity balancing factor, avoiding catastrophic forgetting. Then, GSFS dynamically shifts the testing sample toward the source domain, mitigating error accumulation in an online manner. In addition, CPR is proposed to pull the nearest learnable prototype close to the testing feature and push it away from other prototypes, making each prototype distinguishable during the adaptation. Experimental comparisons lead to a new benchmark, demonstrating PCoTTA's superiority in boosting the model's transferability towards the continually changing target domain. Our source code is available at: https://github.com/Jinec98/PCoTTA.
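The idea behind APM's similarity-balanced mixing can be sketched in a few lines: blend a source prototype with a learnable prototype, weighting the blend by how similar the two already are. The exact formulation below (cosine similarity rescaled to [0, 1] as the balancing factor) is an assumption for illustration, not the paper's equation.

```python
import math

# Illustrative sketch of mixing a source prototype with a learnable one,
# using their cosine similarity as a balancing factor.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mix_prototypes(source, learnable):
    """Blend two prototype vectors, weighted by their similarity."""
    alpha = (cosine(source, learnable) + 1.0) / 2.0  # map [-1, 1] -> [0, 1]
    return [alpha * s + (1.0 - alpha) * l for s, l in zip(source, learnable)]

print(mix_prototypes([1.0, 0.0], [1.0, 0.0]))  # [1.0, 0.0] when they agree
print(mix_prototypes([1.0, 0.0], [0.0, 1.0]))  # [0.5, 0.5] when orthogonal
```

The intuition matches the abstract: when learnable prototypes drift far from the source ones, the mix leans back toward the source, which is one way to resist catastrophic forgetting.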