Arrow Research search

Author name cluster

Ziwei Lin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity-disambiguation profile.

3 papers
1 author row

Possible papers
JBHI · 2026 · Journal Article

MBE-UNet: Multi-Branch Boundary Enhanced U-Net for Ultrasound Segmentation

  • Qing Qin
  • Ziwei Lin
  • Guangyuan Gao
  • Chunxiao Han
  • Ruofan Wang
  • Yingmei Qin
  • Shanshan Li
  • Shan An

Accurately capturing object areas in medical images is crucial for the clinical diagnosis and treatment of diseases. Due to the inherent low contrast and blurry edges in ultrasound images, most existing CNN-based methods often yield unsatisfactory segmentation results, making ultrasound image segmentation a challenging task. This paper introduces a novel multi-branch boundary enhanced network (MBE-UNet) for automatic ultrasound image segmentation. This method can accurately segment targets and delineate boundaries simultaneously using a multi-branch network. First, a global pyramid attention module (GPAM) is designed to capture multi-scale contextual information. Second, we embed a boundary cascade module (BCM) in the main branch to ensure the network focuses on edge information flow and generates relatively desirable boundaries. Finally, a boundary feature fusion module (BFM) is used to integrate boundary and region information, obtaining a boundary enhanced region map. The visual results and quantitative analysis demonstrate that the proposed MBE-UNet outperforms classical segmentation networks on three publicly available ultrasound datasets.
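The boundary feature fusion step described above can be sketched abstractly: given a region probability map and a boundary map, combine them so edge evidence reinforces the region prediction. A minimal NumPy sketch of that idea (the actual BFM is a learned module; the elementwise fusion and `alpha` weight below are hypothetical stand-ins):

```python
import numpy as np

def fuse_boundary_region(region_map, boundary_map, alpha=0.5):
    """Hypothetical boundary-region fusion: amplify region scores
    near predicted boundaries, then renormalize to [0, 1].
    region_map, boundary_map: float arrays in [0, 1], same shape."""
    fused = region_map * (1.0 + alpha * boundary_map)
    return np.clip(fused, 0.0, 1.0)

# Toy 1-D example: boundary evidence sharpens an uncertain edge pixel.
region = np.array([0.1, 0.4, 0.9])
boundary = np.array([0.0, 1.0, 0.0])
print(fuse_boundary_region(region, boundary))  # edge pixel 0.4 -> 0.6
```

In the paper the fusion is learned end-to-end inside the network; this sketch only shows why injecting boundary evidence can sharpen an otherwise uncertain region prediction.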

NeurIPS · 2025 · Conference Paper

From Indicators to Insights: Diversity-Optimized for Medical Series-Text Decoding via LLMs

  • Xiyuan Jin
  • Jing Wang
  • Ziwei Lin
  • Qianru Jia
  • Yuqing Huang
  • Xiaojun Ning
  • Zhonghua Shi
  • Youfang Lin

Medical time-series analysis differs fundamentally from general time-series analysis by requiring specialized domain knowledge to interpret complex signals and clinical context. Large language models (LLMs) hold great promise for augmenting medical time-series analysis by complementing raw series with rich contextual knowledge drawn from biomedical literature and clinical guidelines. However, realizing this potential depends on precise and meaningful prompts that guide the LLM to key information. Yet, determining what constitutes effective prompt content remains non-trivial—especially in medical settings where signal interpretation often hinges on subtle, expert-defined decision-making indicators. To this end, we propose InDiGO, a knowledge-aware evolutionary learning framework that integrates clinical signals and decision-making indicators through iterative optimization. Across four medical benchmarks, InDiGO consistently outperforms prior methods. The code is available at: https://github.com/jinxyBJTU/InDiGO.
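The iterative, evolutionary optimization described in the abstract can be illustrated with a toy loop: maintain a population of candidate indicator subsets, mutate them, and keep the best scorers. Everything below is a hypothetical stand-in (the real InDiGO scoring involves an LLM and clinical benchmarks, and the indicator names here are illustrative):

```python
import random

def evolve_indicator_set(indicators, score_fn, generations=20, pop_size=8, seed=0):
    """Toy evolutionary search over subsets of decision-making indicators.
    score_fn(subset) -> float stands in for LLM-based evaluation."""
    rng = random.Random(seed)
    pop = [frozenset(rng.sample(indicators, k=2)) for _ in range(pop_size)]
    for _ in range(generations):
        # Mutate: flip one indicator in or out of each candidate subset.
        children = []
        for cand in pop:
            flip = rng.choice(indicators)
            child = cand ^ {flip}
            if child:
                children.append(frozenset(child))
        # Select: keep the top pop_size unique subsets by score.
        pop = sorted(set(pop + children), key=score_fn, reverse=True)[:pop_size]
    return max(pop, key=score_fn)

# Toy score: reward covering two target indicators, penalize subset size.
target = {"QRS width", "ST elevation"}
inds = ["QRS width", "ST elevation", "heart rate", "PR interval"]
best = evolve_indicator_set(inds, lambda s: len(s & target) - 0.1 * len(s))
print(sorted(best))
```

The size penalty in the toy score plays the role of a compactness pressure: without it, the loop would happily keep adding indicators that never hurt the score.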

AAAI · 2018 · Conference Paper

Training and Evaluating Improved Dependency-Based Word Embeddings

  • Chen Li
  • Jianxin Li
  • Yangqiu Song
  • Ziwei Lin

Word embedding has been widely used in many natural language processing tasks. In this paper, we focus on learning word embeddings through selective higher-order relationships in sentences to improve the embeddings to be less sensitive to local context and more accurate in capturing semantic compositionality. We present a novel multi-order dependency-based strategy to composite and represent the context under several essential constraints. In order to realize selective learning from the word contexts, we automatically assign the strengths of different dependencies between co-occurred words in the stochastic gradient descent process. We evaluate and analyze our proposed approach using several direct and indirect tasks for word embeddings. Experimental results demonstrate that our embeddings are competitive with or better than state-of-the-art methods and significantly outperform other methods in terms of context stability. The output weights and representations of dependencies obtained in our embedding model conform to most of the linguistic characteristics and are valuable for many downstream tasks.
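The dependency-based context idea underlying this line of work can be sketched: instead of drawing contexts from a linear word window, each word's contexts come from its labeled dependency arcs, with inverse labels for the head side. A minimal sketch; the triples and relation labels are illustrative, and the paper's multi-order composition and learned dependency strengths are not reproduced here:

```python
def dependency_contexts(triples):
    """Build (word, context) training pairs from dependency triples
    (head, relation, dependent), following the common convention of
    tagging the dependent with the relation for the head's context
    and the head with an inverse-marked relation for the dependent's."""
    pairs = []
    for head, rel, dep in triples:
        pairs.append((head, f"{dep}/{rel}"))      # head sees dependent via rel
        pairs.append((dep, f"{head}/{rel}-inv"))  # dependent sees head via rel^-1
    return pairs

# "scientists discover stars": nsubj(discover, scientists), obj(discover, stars)
triples = [("discover", "nsubj", "scientists"), ("discover", "obj", "stars")]
for word, context in dependency_contexts(triples):
    print(word, context)
```

These (word, context) pairs would then feed a skip-gram-style objective; the abstract's contribution of weighting dependency types during SGD sits on top of this basic context-extraction step.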