ICML 2025
MERGE3: Efficient Evolutionary Merging on Consumer-grade GPUs
Abstract
Evolutionary model merging enables the creation of high-performing multi-task models but remains computationally prohibitive for consumer hardware. We introduce MERGE$^3$, an efficient framework that makes evolutionary merging of Large Language Models (LLMs) feasible on a single GPU by reducing fitness computation costs 50$\times$ while retaining a large fraction of the original performance. MERGE$^3$ achieves this by Extracting a reduced dataset for evaluation, Estimating model abilities using Item Response Theory (IRT), and Evolving optimal merges via IRT-based performance estimators. Our method enables state-of-the-art multilingual and cross-lingual merging, transferring knowledge across languages with significantly lower computational overhead. We provide theoretical guarantees and an open-source library, democratizing high-quality model merging.
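The abstract's ability-estimation step builds on Item Response Theory. The paper's exact estimator is not given here, so the following is only a minimal, generic sketch of the standard 2PL IRT model: each evaluation item has a discrimination `a` and difficulty `b` (both hypothetical values below), and a model's latent ability `theta` is fit to its observed right/wrong responses by maximizing the 2PL log-likelihood.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability that a model with ability theta answers an
    item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items, lr=0.1, steps=200):
    """Fit theta by gradient ascent on the 2PL log-likelihood.

    responses: list of 0/1 outcomes on the evaluation items
    items:     list of (a, b) parameters, one pair per item
    The gradient of the log-likelihood w.r.t. theta is sum_i a_i (r_i - p_i).
    """
    theta = 0.0
    for _ in range(steps):
        grad = sum(a * (r - p_correct(theta, a, b))
                   for r, (a, b) in zip(responses, items))
        theta += lr * grad
    return theta

# Hypothetical items of increasing difficulty (a=1 throughout).
items = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]
strong = estimate_ability([1, 1, 1], items)  # answers everything
weak = estimate_ability([0, 0, 0], items)    # answers nothing
```

Once per-item parameters are calibrated, evaluating a candidate merge only requires its responses on the reduced item set, which is how a framework of this kind can cut fitness-computation cost during the evolutionary search.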
Context
- Venue: International Conference on Machine Learning
- Archive span: 1993-2025
- Indexed papers: 16471
- Paper id: 193610670572956232