Mistral AI · Released 2024-04
Mixtral 8x22B
A classic sparse MoE from 2024: eight 22B experts (141B total parameters) with only 39B active per token. Its Apache-2.0 license makes it the go-to open MoE for production teams.
Apache-2.0 · Commercial use OK · general
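The sparsity comes from per-token routing: a small gate picks 2 of the 8 experts for each token, so only those experts' feed-forward weights run. A minimal sketch of top-2 gating, illustrative only and not Mixtral's exact code (tensor names here are assumptions):

    import torch
    import torch.nn.functional as F

    def top2_route(hidden: torch.Tensor, gate_weight: torch.Tensor):
        """Pick the top-2 of n experts per token and renormalize their gate scores.

        hidden:      (tokens, d_model) activations entering the MoE layer
        gate_weight: (d_model, n_experts) router projection
        """
        logits = hidden @ gate_weight              # (tokens, n_experts) router scores
        weights, experts = logits.topk(2, dim=-1)  # top-2 expert scores and indices per token
        weights = F.softmax(weights, dim=-1)       # renormalize over the chosen pair
        return weights, experts                    # combine the two experts' outputs with these weights

    # Example: 4 tokens, d_model=64, 8 experts
    weights, experts = top2_route(torch.randn(4, 64), torch.randn(64, 8))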
Params (max): 141B
Variants: 141B / 47B
Context window: 64K tokens
MMLU: 77.8
HumanEval: 75.3
GSM8K: 88.4
Min VRAM (fp16, smallest variant): 80GB (see the estimate sketch below)
Smallest Q4 GGUF: ~26GB
Languages supported: 5 (English, French, German, Italian, Spanish)
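A back-of-the-envelope check on the memory figures above: weight-only memory is roughly parameter count × bits per weight / 8, before KV cache and runtime overhead. A minimal sketch, where the bits-per-weight values are ballpark assumptions rather than official numbers:

    def weight_gb(params_billion: float, bits_per_weight: float) -> float:
        """Rough weight-only memory in GB; ignores KV cache and runtime overhead."""
        return params_billion * bits_per_weight / 8

    # Full 141B model at fp16: every expert must be resident in memory,
    # even though only ~39B parameters are active per token.
    print(weight_gb(141, 16))   # ~282 GB
    # 47B variant at ~4.5 bits/weight (typical of Q4 GGUF quantization):
    print(weight_gb(47, 4.5))   # ~26 GB, which lines up with the Q4 figure above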
Pros
- ✓ Truly free license
- ✓ Sparse efficiency
- ✓ Battle-tested
Cons
- × Older release; newer models outperform it
- × Large MoE memory footprint: all 141B params must be loaded even though only 39B are active
Highlights
- ● Apache-2.0
- ● 39B active params
- ● Mature in production
Where to download
Hugging Face: mistralai/Mixtral-8x22B-Instruct-v0.1
Or via Ollama (ollama pull mixtral:8x22b) or LM Studio's in-app browser.
Homepage: https://mistral.ai
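For a programmatic pull from Hugging Face, a minimal transformers sketch; it assumes the transformers and accelerate packages are installed and that you have enough GPU memory for the full model (see the VRAM figures above):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"

    # Downloads the weights on first use; device_map="auto" spreads the
    # experts across available GPUs (requires the accelerate package).
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto"
    )

    # The instruct variant expects Mistral's chat template.
    messages = [{"role": "user", "content": "Summarize what a sparse MoE is."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))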