EAAI 2026 Journal Article
Balance divergence for knowledge distillation
- Yafei Qi
- Chen Wang
- Zhaoning Zhang
- Yaping Liu
- Yongmin Zhang
Knowledge distillation (KD) is a fundamental artificial intelligence (AI) technique for model compression and optimization. In computer vision applications, most KD methods use Kullback–Leibler (KL) divergence to align teacher–student output probabilities, but they often neglect crucial negative aspects of the teacher's "dark knowledge" by underweighting low-probability signals. This limitation leads to suboptimal logit mimicry and unbalanced knowledge transfer to the student network. In this paper, we investigate the impact of this imbalance and propose a novel method named Balance Divergence Distillation (BDD). By introducing a compensatory operation based on reverse KL divergence, our method improves the modeling of the extremely small probabilities that the teacher assigns to negative classes while preserving the learning capacity for positive classes. Furthermore, we examine the impact of different temperature coefficient adjustments, which can further balance knowledge transfer. The evaluation results demonstrate that our method achieves accuracy improvements of 1%∼3% for lightweight student networks over standard KD methods on both the Canadian Institute for Advanced Research 100-class dataset (CIFAR-100) and ImageNet. Additionally, when applied to semantic segmentation, our approach improves the student by 4.55% in mean Intersection over Union (mIoU) over the baseline on the Cityscapes dataset. These experiments confirm that our method provides a simple yet highly effective solution that can be seamlessly integrated with various KD frameworks across different vision tasks.
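To make the idea in the abstract concrete, the following is a minimal PyTorch sketch of a balanced divergence loss that combines the standard forward KL distillation term with a reverse KL compensatory term on temperature-scaled softmax outputs. The function name `balanced_divergence_loss`, the weighting parameter `alpha`, and the default temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def balanced_divergence_loss(student_logits, teacher_logits,
                             temperature=4.0, alpha=0.5):
    """Illustrative balanced distillation loss (assumed formulation).

    Combines forward KL(teacher || student), which emphasizes classes the
    teacher rates highly, with reverse KL(student || teacher), which penalizes
    the student for placing mass where the teacher assigns very small
    (negative-class) probabilities.
    """
    t = temperature
    # Temperature-scaled distributions for teacher and student.
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    p_student = F.softmax(student_logits / t, dim=-1)
    log_p_teacher = F.log_softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)

    # Forward KL: KL(teacher || student), the standard KD term.
    forward_kl = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    # Reverse KL: KL(student || teacher), the compensatory term that
    # emphasizes the teacher's low-probability (negative) classes.
    reverse_kl = F.kl_div(log_p_teacher, p_student, reduction="batchmean")

    # Weighted combination; the t**2 factor keeps gradient magnitudes
    # comparable to the hard-label loss, as in standard KD.
    return (alpha * forward_kl + (1.0 - alpha) * reverse_kl) * (t ** 2)


if __name__ == "__main__":
    student_logits = torch.randn(8, 100, requires_grad=True)
    teacher_logits = torch.randn(8, 100)
    loss = balanced_divergence_loss(student_logits, teacher_logits)
    loss.backward()
    print(f"balanced divergence loss: {loss.item():.4f}")
```

In this sketch the reverse KL term contributes gradients through the student's own probabilities, which is one plausible way to preserve signal from the teacher's near-zero negative-class outputs; the paper's actual compensation and temperature scheme should be taken from the full text.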