Microsoft · Released 2024-12

Phi-4

Microsoft's 14B-parameter reasoning model. It reaches math and reasoning scores typical of 70B-class models from a 14B base, the best parameter efficiency on the leaderboard.

MIT · Commercial use OK · small · reasoning
Params (max): 14B
Variants: 14B
Context window: 16K tokens
MMLU: 84.8
HumanEval: 82.6
GSM8K: 95.2
Min VRAM (fp16, smallest variant): ~28GB
Smallest Q4 GGUF: ~9GB
Languages supported: 5
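The VRAM figures above follow directly from parameter count times bytes per weight; a rough sketch of that arithmetic (ignoring KV cache, activations, and runtime overhead, which add a few more GB):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GiB: params * bits / 8, converted to GiB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# 14B at fp16 (16 bits/weight): ~26 GiB of weights alone, hence the ~28GB
# figure once runtime overhead is included.
fp16 = weight_memory_gb(14, 16)

# Q4_K_M-style quantization averages roughly 4.5 bits/weight (an assumption;
# exact size varies by GGUF variant): ~7 GiB, matching a ~9GB file with metadata.
q4 = weight_memory_gb(14, 4.5)
```

This is why the fp16 checkpoint does not fit a 24GB consumer card, while a 4-bit quant does with room for context.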
Pros
  • Best params/perf ratio
  • MIT license
  • Runs on a single RTX 4090 (quantized)
Cons
  • Short 16K context
  • English-heavy

Highlights

  • MIT license — fully open
  • GSM8K 95.2 from 14B params
  • Trained heavily on synthetic data

Where to download

Hugging Face: microsoft/phi-4
Or via Ollama (ollama pull phi4) or LM Studio's in-app browser.
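For local use from Python, the Hugging Face checkpoint can be loaded with the transformers library. A minimal sketch, assuming a standard transformers install; the prompt and generation settings are illustrative, not from the model card:

```python
MODEL_ID = "microsoft/phi-4"  # Hugging Face repo from the download link above

if __name__ == "__main__":
    # Loading is guarded so importing this file stays cheap: the fp16 weights
    # alone are ~28GB, so use device_map="auto" (or a quantized GGUF via
    # Ollama/LM Studio) on consumer hardware.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    # Phi-4 is a chat model, so format the prompt with its chat template.
    msgs = [{"role": "user", "content": "What is 17 * 24?"}]
    inputs = tok.apply_chat_template(
        msgs, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    out = model.generate(inputs, max_new_tokens=64)
    print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```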
