EAAI 2026 Journal Article
Few-shot learning perfected: The efficacy and simplicity of Meta-Baseline++
- Lianyang Zhou
- Haisong Huang
- Jianan Wei
As one of the most extensively researched approaches to few-shot learning, meta-learning has garnered significant attention. However, increased algorithmic complexity does not necessarily translate into commensurate gains in few-shot accuracy, and many recent methods introduce complex modules, such as attention mechanisms, that can compromise the embedding features learned by the backbone. This paper presents Meta-Baseline++, a meta-learning baseline that enhances the original Meta-Baseline model without adding any attention module or other complex component. Meta-Baseline++ preserves the original embedding features while enhancing the extracted basic features, rectifying the loss of important semantic features caused by the simple averaging step in the original Meta-Baseline. To help the network learn sample features more effectively and to improve parameter updates during training, we further introduce an anchor-based classification loss, L_anchor. We evaluate the proposed method on the miniImageNet, tieredImageNet, CUB-200-2011, and CIFAR-FS datasets and compare it with the original Meta-Baseline. Meta-Baseline++ improves accuracy by 2.26% on miniImageNet and 1.58% on tieredImageNet, and it outperforms several more complex meta-learning algorithms, achieving 80.56% accuracy on CUB-200-2011 and 70.43% on CIFAR-FS. These results set a new benchmark for few-shot baseline models and prompt a re-evaluation of some methods in few-shot learning.
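For context, the "simple averaging approach" the abstract criticizes is the standard prototype computation in the original Meta-Baseline (Chen et al.): support embeddings of each class are averaged into a single prototype, and queries are classified by temperature-scaled cosine similarity. The minimal sketch below illustrates that standard mechanism only; the function and variable names are ours, not taken from the paper's code.

```python
# Minimal sketch of Meta-Baseline's prototype-averaging classifier.
# Names (cosine_prototype_logits, tau, etc.) are illustrative assumptions.
import torch
import torch.nn.functional as F

def cosine_prototype_logits(support, support_labels, query, n_way, tau=10.0):
    """Classify query embeddings against per-class prototypes.

    support: (n_way * k_shot, d) backbone embeddings of the support set
    support_labels: (n_way * k_shot,) integer labels in [0, n_way)
    query: (n_query, d) backbone embeddings of the query set
    tau: temperature (learnable in Meta-Baseline; a constant here for brevity)
    """
    # Simple per-class averaging -- the step the abstract argues
    # discards important semantic features.
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    # Scaled cosine similarity between each query and each prototype.
    logits = tau * F.cosine_similarity(
        query.unsqueeze(1), prototypes.unsqueeze(0), dim=-1
    )
    return logits  # (n_query, n_way), fed to cross-entropy in meta-training
```

Averaging collapses all k_shot support embeddings of a class into one vector, which is the information loss Meta-Baseline++ is designed to mitigate.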
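The abstract does not define the form of L_anchor. Purely as a hypothetical illustration of the general family of anchor-based classification losses (learnable per-class anchor vectors that embeddings are classified against), one might write something like the following; every name here is an assumption, not the paper's actual loss.

```python
# Hypothetical sketch of an anchor-based classification loss; the paper's
# L_anchor is not specified in the abstract, so this shows only the family.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorLoss(nn.Module):
    def __init__(self, n_classes, embed_dim, tau=10.0):
        super().__init__()
        # One learnable anchor vector per base class.
        self.anchors = nn.Parameter(torch.randn(n_classes, embed_dim))
        self.tau = tau

    def forward(self, embeddings, labels):
        # Cosine similarity of each embedding to every class anchor.
        logits = self.tau * (
            F.normalize(embeddings, dim=-1)
            @ F.normalize(self.anchors, dim=-1).t()
        )
        # Cross-entropy against the anchors supplies the classification
        # signal that pulls same-class embeddings toward their anchor.
        return F.cross_entropy(logits, labels)
```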