Alibaba's Qwen team just dropped Qwen3.5-397B: a massive 397B-parameter MoE model that activates only 17B parameters at inference time, plus a 1M-token context window built specifically for AI agents. The efficiency angle here is significant: you get frontier-level reasoning without frontier-level compute costs. Open-source models continue to close the gap faster than many expected.
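That 397B-total / 17B-active split is the core sparse-MoE trick: a learned router sends each token to a small handful of experts, so most of the model's weights sit idle on any given forward pass, and per-token compute scales with the few experts chosen rather than the full parameter count. The post doesn't describe Qwen3.5-397B's internal routing, so the sketch below is a generic top-k MoE layer in PyTorch; the expert count, dimensions, and k are illustrative assumptions, not the model's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal sparse mixture-of-experts layer (illustrative, not Qwen's).

    A router scores every expert for every token, but only the top-k
    experts actually run per token -- which is why 'active' parameters
    can be a small fraction of total parameters.
    """
    def __init__(self, d_model=512, d_ff=2048, num_experts=64, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff),
                nn.GELU(),
                nn.Linear(d_ff, d_model),
            )
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):
        # x: (num_tokens, d_model)
        scores = self.router(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize the k gates
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    # Expert e runs only on the tokens routed to it;
                    # every other expert's weights are untouched this pass.
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out
```

With 64 experts and top_k=2 in this toy setup, each token touches roughly 1/32 of the expert weights per layer, which is the same mechanism that lets a 397B model serve at a 17B-sized compute budget.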