FluxMoE: Decoupling Expert Residency for High-Performance MoE Serving
Abstract: Mixture-of-Experts (MoE) models have become a dominant paradigm for scaling large language models, but their rapidly growing parameter sizes introduce a fundamental inefficiency during inference: most expert weights remain idle in GPU memory while competing with performance-critical runtime state such as the key-value (KV) cache. Since KV cache capacity directly determines serving throughput, this mismatch leads to underutilized memory and degraded performance. In this paper, we present FluxMoE, a new MoE inference system that decouples expert parameters from persistent GPU residency. FluxMoE introduces an expert paging abstraction that treats expert weights as streamed, transient resources, materializing them on demand and evicting them immediately after use, allowing GPU memory to be preferentially allocated to throughput-critical runtime state. We implement FluxMoE atop vLLM to enable efficient MoE inference under severe memory constraints. Experimental results demonstrate that FluxMoE achieves up to 3.0$\times$ throughput gains over vLLM in memory-intensive regimes, without compromising model fidelity.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2604.02715 [cs.LG] (or arXiv:2604.02715v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2604.02715
arXiv-issued DOI via DataCite (pending registration)

Submission history
From: Qingxiu Liu
[v1] Fri, 3 Apr 2026 04:16:01 UTC (543 KB)
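The expert paging abstraction described in the abstract can be illustrated with a minimal sketch: expert weights stay in host memory and are materialized into a bounded GPU-side pool only while their routed tokens are processed, then evicted so the freed memory can serve the KV cache. All names here (`ExpertPager`, `materialize`, `evict`) are hypothetical; the paper's actual implementation atop vLLM is not shown.

```python
class ExpertPager:
    """Sketch of transient expert residency: load on demand, evict after use."""

    def __init__(self, host_experts, gpu_budget):
        self.host_experts = host_experts  # expert_id -> weights (host copy)
        self.gpu_budget = gpu_budget      # max experts resident at once
        self.resident = {}                # expert_id -> weights ("on GPU")

    def materialize(self, expert_id):
        # Stream the expert in on demand; evict others if over budget.
        if expert_id not in self.resident:
            while len(self.resident) >= self.gpu_budget:
                self.resident.popitem()   # make room in the bounded pool
            self.resident[expert_id] = self.host_experts[expert_id]
        return self.resident[expert_id]

    def evict(self, expert_id):
        # Drop the expert immediately after its tokens are processed,
        # returning the memory to throughput-critical state (e.g. KV cache).
        self.resident.pop(expert_id, None)

    def forward(self, token, routed_expert):
        weights = self.materialize(routed_expert)
        out = weights * token             # stand-in for the expert MLP
        self.evict(routed_expert)         # residency is transient
        return out
```

A usage example under these assumptions: with a budget of one resident expert, each forward call pages its expert in, computes, and leaves GPU memory empty again, which is the property that lets KV cache capacity grow.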