JBHI, 2026, Journal Article
Asymmetric Co-Training With Decoder–Head Decoupling for Semi-Supervised Medical Image Segmentation
- Yuxin Tian
- Muhan Shi
- Jianxun Li
- Bin Zhang
- Min Qu
- Yinxue Shi
- Xian Yang
- Min Wang
Semi-supervised learning reduces annotation costs in medical image segmentation by leveraging abundant unlabeled data alongside scarce labels. Most models adopt an encoder–decoder architecture with a task-specific segmentation head. While co-training is effective, existing frameworks suffer from intra-network coupling (decoder–head binding) and inter-network coupling (over-aligned predictions), which reduce prediction diversity and amplify confirmation bias, particularly for small structures, ambiguous boundaries, and anatomically variable regions. We propose AsyCo, an asymmetric co-training framework with two components. (1) Asymmetric Decoder Coupling implements decoder–head decoupling by dynamically remapping encoder–decoder features to non-default heads across branches, breaking intra-network coupling and creating diverse prediction paths without additional parameters. (2) Hierarchical Consistency Regularization converts this diversity into stable supervision by aligning (i) the two branches' final outputs along their default paths (branch-output consistency), (ii) predictions from different segmentation heads evaluated on identical decoder features (inter-head consistency), and (iii) intermediate encoder–decoder representations (representation consistency). Through these mechanisms, AsyCo explicitly mitigates both intra- and inter-network coupling, improving training stability and reducing confirmation bias. Extensive experiments on three clinical benchmarks under limited-label regimes demonstrate that AsyCo consistently outperforms nine state-of-the-art semi-supervised methods. These results indicate that AsyCo delivers accurate and reliable segmentation with minimal annotation, thereby enhancing the reliability of medical image analysis in real-world clinical practice.
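The two mechanisms in the abstract can be illustrated with a minimal, dependency-free sketch. Note the heavy simplifications: decoder features and predictions are plain float vectors, "heads" are toy element-wise scalings, and all names (`head`, `mse`, `feats_1`, etc.) are illustrative assumptions, not the paper's actual architecture or loss definitions. The point is only the routing pattern (each branch's features passed through the other branch's head) and the three consistency terms.

```python
# Toy sketch of AsyCo-style asymmetric head remapping and hierarchical
# consistency (assumed simplification; not the paper's implementation).

def mse(p, q):
    """Mean squared difference between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)

def head(weights, feats):
    """A toy segmentation head: element-wise scaling of decoder features."""
    return [w * f for w, f in zip(weights, feats)]

# Two co-trained branches, each with its own decoder features and default head.
feats_1 = [0.2, 0.8, 0.5]   # decoder features, branch 1 (made-up values)
feats_2 = [0.3, 0.7, 0.4]   # decoder features, branch 2
head_1 = [1.0, 0.9, 1.1]    # default head of branch 1
head_2 = [0.9, 1.0, 1.0]    # default head of branch 2

# Default paths vs. remapped (non-default) paths across branches.
out_1 = head(head_1, feats_1)     # default path, branch 1
out_2 = head(head_2, feats_2)     # default path, branch 2
cross_1 = head(head_2, feats_1)   # branch-1 features -> branch-2 head
cross_2 = head(head_1, feats_2)   # branch-2 features -> branch-1 head

# Hierarchical consistency regularization: three alignment terms.
loss_branch = mse(out_1, out_2)          # (i) branch-output consistency
loss_heads = (mse(out_1, cross_1)        # (ii) inter-head consistency:
              + mse(out_2, cross_2)) / 2 #      same features, different heads
loss_repr = mse(feats_1, feats_2)        # (iii) representation consistency

total = loss_branch + loss_heads + loss_repr
print(total)
```

Because the remapped paths reuse the existing heads, this routing adds prediction diversity without introducing any new parameters, matching the abstract's claim.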