DeepSeek: R1 Distill Qwen 14B

deepseek/deepseek-r1-distill-qwen-14b

Created Jan 29, 2025 · 64,000 context
$0.15/M input tokens · $0.15/M output tokens
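As a rough illustration of the listed pricing (input and output both billed at $0.15 per million tokens), here is a minimal sketch of estimating the cost of a single request. The token counts are hypothetical placeholders, not measurements from this model.

```python
# Rough per-request cost estimate at the listed rates:
# $0.15 per 1M input tokens and $0.15 per 1M output tokens.
INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.15 / 1_000_000  # dollars per output token

input_tokens = 2_000    # hypothetical prompt size
output_tokens = 8_000   # reasoning models often produce long outputs

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"Estimated cost: ${cost:.6f}")  # -> Estimated cost: $0.001500
```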

DeepSeek R1 Distill Qwen 14B is a distilled large language model based on Qwen 2.5 14B, using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

  • AIME 2024 pass@1: 69.7
  • MATH-500 pass@1: 93.9
  • CodeForces Rating: 1481

The model is fine-tuned on DeepSeek R1's outputs, enabling performance comparable to larger frontier models.

Providers for R1 Distill Qwen 14B

OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
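Because provider selection and fallbacks are handled server-side, a request only needs the model slug. Below is a minimal sketch of calling this model through OpenRouter's OpenAI-compatible chat completions endpoint; the `OPENROUTER_API_KEY` environment variable and the prompt are placeholders, and error handling beyond a status check is omitted.

```python
import os
import requests

# Minimal sketch: send a chat completion request to OpenRouter for
# DeepSeek R1 Distill Qwen 14B. Provider routing and fallbacks are
# handled by OpenRouter, so only the model slug is specified.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1-distill-qwen-14b",
        "messages": [
            {"role": "user", "content": "What is the derivative of x^3?"}
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```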
