AI

Alibaba open-sources Qwen3.6-35B-A3B: 35B MoE, 3B active params, runs on a laptop, outdraws Claude Opus 4.7

Alibaba's Qwen team releases Qwen3.6-35B-A3B as fully open-source on HuggingFace under the Apache license. The model uses a Mixture-of-Experts architecture with 35B total parameters but only 3B active per token, making it runnable on consumer hardware. Simon Willison's post 'Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7' lands 404 points and 84 comments on Hacker News, while the original release thread hits 100+ on r/LocalLLaMA. The release is pitched as 'agentic coding power, now open to all.'
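
For readers who want to try it locally, here is a minimal inference sketch using Hugging Face transformers. The repo id Qwen/Qwen3.6-35B-A3B is an assumption inferred from the model name in the announcement (check the actual HuggingFace page), and the prompt is just a nod to the pelican benchmark.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumption: the checkpoint lives at "Qwen/Qwen3.6-35B-A3B" (repo id
# inferred from the model name in the release; verify on HuggingFace).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.6-35B-A3B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # take the dtype the checkpoint was saved in
    device_map="auto",    # spread weights across GPU/CPU as available
)

messages = [{"role": "user", "content": "Draw a pelican riding a bicycle as SVG."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```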

alibaba · qwen · open-source-llm · mixture-of-experts · local-llm · coding-agent

Why it matters

A laptop-runnable 35B model that can compete with a frontier closed model on a creative task is a significant efficiency milestone. Open-sourcing it under the Apache license means the practitioner community can fine-tune, quantize, and deploy it without API dependency. Simon Willison's comparison benchmarks it directly against Anthropic's latest Opus, adding to the signal that the capability gap between open and closed frontier models is shrinking.
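
As a rough back-of-the-envelope check on the "runs on a laptop" claim (my arithmetic, not figures from the release): in an MoE model, weight memory scales with total parameters, since every expert must be resident, while per-token compute scales only with the 3B active parameters. Quantization is what makes the memory side fit.

```python
# Back-of-the-envelope memory estimate for a 35B-total / 3B-active MoE.
# Generic quantization sizes; none of these numbers come from the release.
TOTAL_PARAMS = 35e9    # all experts must be held in memory
ACTIVE_PARAMS = 3e9    # parameters actually used per token (compute cost)

for name, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    weights_gb = TOTAL_PARAMS * bytes_per_param / 1e9
    print(f"{name}: ~{weights_gb:.0f} GB of weights")

# fp16:  ~70 GB -> workstation territory
# 8-bit: ~35 GB -> high-end laptop with 48-64 GB unified memory
# 4-bit: ~18 GB -> fits on a 32 GB MacBook-class machine
```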

Impact scorecard

Overall: 7.28/10
Stakes: 7.0
Novelty: 8.0
Authority: 7.0
Coverage: 6.0
Concreteness: 8.0
Social: 7.0
FUD risk: 2.0
Coverage: 9 outlets · 2 tier-1
HuggingFace, HN, Reddit/LocalLLaMA, simonwillison.net, Alibaba_Qwen on X
X / Twitter: 1,400 mentions (@Alibaba_Qwen · 900 likes)
Reddit: 1,100 upvotes (r/LocalLLaMA, r/MachineLearning)

Trust check

high

Official HuggingFace release by Alibaba_Qwen. Apache license confirmed. Cross-verified by Simon Willison's benchmark post and by r/LocalLLaMA community testing.
