
IJCAI 2024

Hacking Task Confounder in Meta-Learning

Conference Paper Machine Learning Artificial Intelligence

Abstract

Meta-learning enables rapid generalization to new tasks by learning knowledge from various tasks. It is intuitively assumed that as training progresses, a model will acquire richer knowledge, leading to better generalization performance. However, our experiments reveal an unexpected result: negative knowledge transfer occurs between tasks, harming generalization performance. To explain this phenomenon, we construct Structural Causal Models (SCMs) for causal analysis. Our investigation uncovers spurious correlations between task-specific causal factors and labels in meta-learning. Furthermore, the confounding factors differ across batches. We refer to these confounding factors as "Task Confounders". Based on these findings, we propose a plug-and-play Meta-learning Causal Representation Learner (MetaCRL) to eliminate task confounders. It encodes decoupled generating factors from multiple tasks and utilizes an invariant-based bi-level optimization mechanism to ensure their causality for meta-learning. Extensive experiments on various benchmark datasets demonstrate that our work achieves state-of-the-art (SOTA) performance. The code is available at https://github.com/WangJingyao07/MetaCRL.
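The abstract's "bi-level optimization mechanism" refers to the standard meta-learning pattern of an inner, per-task adaptation loop nested inside an outer, cross-task meta-update. The following is a minimal MAML-style sketch of that pattern on toy 1-D regression tasks; it illustrates the bi-level structure only, not MetaCRL's invariant-based objective, and all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Toy tasks: 1-D linear regression y = a * x, where the slope a is a
# task-specific generating factor sampled per task.
rng = np.random.default_rng(0)

def sample_task():
    a = rng.uniform(0.5, 2.0)            # task-specific factor
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def loss_and_grad(w, x, y):
    # Mean squared error of the linear model pred = w * x,
    # plus its gradient with respect to w.
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w = 0.0                                  # meta-parameter (shared init)
inner_lr, outer_lr = 0.1, 0.05

losses = []
for step in range(200):
    meta_grad, batch_loss = 0.0, 0.0
    for _ in range(4):                   # batch of tasks per meta-step
        x, y = sample_task()
        # Inner level: one adaptation step from the shared init.
        _, g = loss_and_grad(w, x, y)
        w_task = w - inner_lr * g
        # Outer level: evaluate the adapted parameters
        # (first-order approximation of the meta-gradient).
        l, g_adapt = loss_and_grad(w_task, x, y)
        meta_grad += g_adapt
        batch_loss += l
    w -= outer_lr * meta_grad / 4
    losses.append(batch_loss / 4)
```

After training, the shared initialization `w` sits where a single inner step adapts well to any sampled task, so the post-adaptation loss falls well below its initial value. MetaCRL's contribution, per the abstract, is to add an invariance constraint at the outer level so that the learned representation keeps only causal, non-confounded factors; that constraint is not modeled here.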

Authors

Keywords

  • Computer Vision: CV: Transfer, low-shot, semi- and un-supervised learning
  • Machine Learning: ML: Causality
  • Machine Learning: ML: Few-shot learning
  • Machine Learning: ML: Meta-learning

Context

Venue
International Joint Conference on Artificial Intelligence
Archive span
1969-2025
Indexed papers
14525
Paper id
243276445696529945