Arrow Research search

Author name cluster

Junyu Shi

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

3 papers
1 author row

Possible papers (3)

AAAI 2026 · Conference Paper

EmbryoDiff: A Conditional Diffusion Framework with Multi-Focal Feature Fusion for Fine-Grained Embryo Developmental Stage Recognition

  • Yong Sun
  • Zhengjie Zhang
  • Junyu Shi
  • Zhiyuan Zhang
  • Lijiang Liu
  • Qiang Nie

Identification of fine-grained embryo developmental stages during in vitro fertilization (IVF) is crucial for assessing embryo viability. Although recent deep learning methods have achieved promising accuracy, existing discriminative models fail to utilize the distributional prior of embryonic development to improve accuracy. Moreover, their reliance on single-focal information leads to incomplete embryonic representations, making them susceptible to feature ambiguity under cell occlusions. To address these limitations, we propose EmbryoDiff, a two-stage diffusion-based framework that formulates the task as a conditional sequence denoising process. Specifically, we first train and freeze a frame-level encoder to extract robust multi-focal features. In the second stage, we introduce a Multi-Focal Feature Fusion Strategy that aggregates information across focal planes to construct a 3D-aware morphological representation, effectively alleviating ambiguities arising from cell occlusions. Building on this fused representation, we derive complementary semantic and boundary cues and design a Hybrid Semantic-Boundary Condition Block to inject them into the diffusion-based denoising process, enabling accurate embryonic stage classification. Extensive experiments on two benchmark datasets show that our method achieves state-of-the-art results. Notably, with only a single denoising step, our model obtains the best average test performance, reaching 82.8% and 81.3% accuracy on the two datasets, respectively.
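The abstract does not spell out how the Multi-Focal Feature Fusion Strategy combines focal planes; one plausible reading is a score-weighted aggregation, where each focal plane of a frame gets a learned relevance score and a softmax over planes produces the fused per-frame representation. A minimal numpy sketch under that assumption (the function name, shapes, and scoring scheme are hypothetical, not the paper's):

```python
import numpy as np

def fuse_focal_planes(feats, scores):
    """Toy multi-focal fusion: a softmax over per-plane relevance scores
    weights each focal plane's features, giving one fused vector per frame.
    feats:  (frames, planes, dim) per-plane features
    scores: (frames, planes)      per-plane relevance scores"""
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)           # softmax across planes
    return (w[..., None] * feats).sum(axis=1)      # (frames, dim)

feats = np.arange(24, dtype=float).reshape(2, 3, 4)  # 2 frames, 3 planes, dim 4
scores = np.zeros((2, 3))                            # uniform scores -> plane average
print(fuse_focal_planes(feats, scores)[0])           # → [4. 5. 6. 7.]
```

With uniform scores the fusion reduces to a plain average over planes; in the paper's setting the scores would come from the frozen frame-level encoder, letting unoccluded planes dominate the fused representation.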

JBHI 2026 · Journal Article

Morphology Prior Enhanced Teeth Segmentation for High-Resolution Oral Scans

  • Yuxian Jiang
  • Xiuying Wang
  • Tao Yang
  • Changkai Ji
  • Lanshan He
  • Yusheng Liu
  • Wei Wang
  • Min Liu

Deep learning methods have been proposed for tooth segmentation on high-resolution intra-oral scans (IOS), which plays a crucial role in clinical dental practice. However, these methods generally segment teeth on low-resolution data with a fixed receptive field and produce the final segmentation by up-sampling interpolation, neglecting two morphology priors of teeth: their similar dental-arch structures and the significantly different curvatures across parts of each tooth. They therefore lack adaptability to different parts of each tooth and segment boundary points between teeth and gums less accurately because of the up-sampling computation. Furthermore, the cluttered poses of IOS limit generalization and the usability of tooth location and geometric information. To address these limitations, a morphology-prior-enhanced teeth segmentation framework is proposed in this paper. Firstly, a robust preprocessing step aligns the poses of different IOS by computing their dental-arch orientations, improving segmentation generalization and the usability of IOS geometric information. Secondly, a decomposition-merging strategy is designed to avoid the up-sampling limitation: it decomposes an IOS into multiple low-resolution data and merges their segmentation outcomes into a high-resolution result. Thirdly, an innovative module integrating semantic and geometric features is proposed to adaptively select deformable receptive fields. It samples geometrically within a variable probability space to construct receptive fields with varied graph relationships for different points, facilitating adaptive segmentation of different parts of each tooth. Experimental results on 6,238 IOS from four centers demonstrate that our method significantly outperforms 11 state-of-the-art methods, achieving a 6.93% improvement in cross-center testing.

AAAI 2025 · Conference Paper

Improving Generalization of Universal Adversarial Perturbation via Dynamic Maximin Optimization

  • Yechao Zhang
  • Yingzhe Xu
  • Junyu Shi
  • Leo Yu Zhang
  • Shengshan Hu
  • Minghui Li
  • Yanjun Zhang

Deep neural networks (DNNs) are susceptible to universal adversarial perturbations (UAPs). These perturbations are meticulously designed to fool the target model universally across all sample classes. Unlike instance-specific adversarial examples (AEs), generating UAPs is more complex because they must be generalized across a wide range of data samples and models. Our research reveals that existing universal attack methods, which optimize UAPs using DNNs with static model parameter snapshots, do not fully leverage the potential of DNNs to generate more effective UAPs. Rather than optimizing UAPs against static DNN models with a fixed training set, we suggest using dynamic model-data pairs to generate UAPs. In particular, we introduce a dynamic maximin optimization strategy, aiming to optimize the UAP across a variety of optimal model-data pairs. We term this approach DM-UAP. DM-UAP utilizes an iterative max-min-min optimization framework that refines the model-data pairs, coupled with a curriculum UAP learning algorithm to examine the combined space of model parameters and data thoroughly. Comprehensive experiments on the ImageNet dataset demonstrate that the proposed DM-UAP markedly enhances both cross-sample universality and cross-model transferability of UAPs. Using only 500 samples for UAP generation, DM-UAP outperforms the state-of-the-art approach with an average increase in fooling ratio of 12.108%.
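The maximin structure can be illustrated on a toy problem: the shared perturbation performs gradient ascent on the loss while, between its updates, the model is refit to minimize that same loss, so the UAP must fool a moving target rather than a static parameter snapshot. The sketch below is a two-level toy on a logistic model, not the paper's max-min-min framework over DNN model-data pairs or its curriculum learning; all names and hyperparameters are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dm_uap_toy(X, y, steps=200, lr_w=0.1, lr_delta=0.5, eps=0.5, seed=0):
    """Toy maximin UAP search on a logistic model with labels in {-1, +1}.
    Inner step: weights w are refit to MINIMIZE the loss (dynamic model);
    outer step: the shared perturbation delta ASCENDS the loss."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    delta = np.zeros(X.shape[1])
    for _ in range(steps):
        Xp = X + delta                            # all samples share one UAP
        g = -y * sigmoid(-y * (Xp @ w))           # per-sample dloss/dmargin
        w -= lr_w * (g[:, None] * Xp).mean(0)     # inner min: refit the model
        delta += lr_delta * g.mean() * w          # outer max: ascend the loss
        delta = np.clip(delta, -eps, eps)         # keep the UAP norm-bounded
    fooling = float(((X + delta) @ w * y < 0).mean())
    return delta, fooling

# Two separable Gaussian blobs as stand-in "data".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.3, (50, 2)), rng.normal(-2.0, 0.3, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
delta, fooling = dm_uap_toy(X, y)
print(delta, fooling)
```

The clip keeps delta inside an L-infinity ball of radius eps, mirroring the bounded-perturbation constraint standard in UAP work; the interesting quantity is the fooling ratio achieved against the final (refit) model rather than the initial one.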