
NeurIPS 2025

Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations

Conference Paper · Main Conference Track · Artificial Intelligence · Machine Learning

Abstract

Mixture-of-Experts (MoE) models achieve a favorable trade-off between performance and inference efficiency by activating only a subset of experts. However, the memory overhead of storing all experts remains a major limitation, especially in large-scale MoE models such as DeepSeek-R1 (671B). In this study, we investigate domain specialization and expert redundancy in large-scale MoE models and uncover a consistent behavior we term \emph{few-shot expert localization}: with only a few in-domain demonstrations, the model consistently activates a sparse and stable subset of experts on tasks within the same domain. Building on this observation, we propose a simple yet effective pruning framework, \textbf{EASY-EP}, that leverages a few domain-specific demonstrations to identify and retain only the most relevant experts. EASY-EP comprises two key components: \textbf{output-aware expert importance assessment} and \textbf{expert-level token contribution estimation}. The former evaluates the importance of each expert for the current token by considering the gating scores and the L2 norm of the outputs of activated experts, while the latter assesses the contribution of tokens based on representation similarities before and after the routed experts. Experiments on DeepSeek-R1 and DeepSeek-V3-0324 show that our method achieves comparable performance and $2.99\times$ higher throughput under the same memory budget as the full model, while keeping only half the experts. Our code is available at https://github.com/RUCAIBox/EASYEP.
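The two scoring components described above can be sketched in a few lines. The snippet below is a minimal illustration only, assuming per-token gating scores, per-expert outputs, and hidden states before and after the routed experts are available as tensors; the tensor names, the contribution weighting, and the top-k selection are illustrative assumptions rather than the paper's exact formulation (see the official repository for the authors' implementation).

```python
# Illustrative sketch of EASY-EP-style expert scoring for one MoE layer.
# Assumed (hypothetical) inputs:
#   gate_scores:    (num_tokens, num_experts)          routing weights, 0 for inactive experts
#   expert_outputs: (num_tokens, num_experts, hidden)  per-expert outputs, 0 for inactive experts
#   h_pre, h_post:  (num_tokens, hidden)               hidden states before / after routed experts
import torch
import torch.nn.functional as F

def expert_importance(gate_scores, expert_outputs, h_pre, h_post):
    # Expert-level token contribution: tokens whose representation changes more
    # across the routed experts (lower similarity) are weighted more heavily.
    token_contrib = 1.0 - F.cosine_similarity(h_pre, h_post, dim=-1)        # (num_tokens,)

    # Output-aware importance: gating score times the L2 norm of each expert's output.
    out_norm = torch.linalg.norm(expert_outputs, dim=-1)                     # (num_tokens, num_experts)
    per_token_importance = gate_scores * out_norm                            # (num_tokens, num_experts)

    # Aggregate over the few-shot demonstration tokens, weighted by token contribution.
    return (token_contrib.unsqueeze(-1) * per_token_importance).sum(dim=0)   # (num_experts,)

def select_experts(importance, keep_ratio=0.5):
    # Keep the top experts by accumulated importance (e.g. half of them) and prune the rest.
    k = max(1, int(importance.numel() * keep_ratio))
    return torch.topk(importance, k).indices
```

With such scores accumulated over a handful of in-domain demonstrations, pruning reduces to keeping the selected expert indices in each MoE layer and dropping the rest, which is what yields the memory and throughput savings reported in the abstract.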

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: Annual Conference on Neural Information Processing Systems
Archive span: 1987-2025
Indexed papers: 30776
Paper id: 752946597252726548