
Deepseek-R1-Distill-Qwen-32b vs Qwen3-Embedding-0.6b

Pricing, context window, capabilities, and release date — pulled from each provider's public docs. Both are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key; it's free to sign up.
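The per-model timing shown in the live race can be reproduced client-side. A minimal sketch of a stream consumer that records time-to-first-token (the helper name `consume_stream` is illustrative, not part of the AIgateway or OpenAI SDK; it accepts any iterable of OpenAI-style chat chunks):

```python
import time

def consume_stream(chunks):
    """Collect streamed text from OpenAI-style chat chunks and
    return (full_text, time_to_first_token_seconds)."""
    start = time.monotonic()
    ttft = None
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            if ttft is None:
                ttft = time.monotonic() - start  # first token arrived
            parts.append(delta)
    return "".join(parts), ttft
```

Pass it a streaming response, e.g. `consume_stream(client.chat.completions.create(model=..., messages=..., stream=True))`.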
                  Deepseek-R1-Distill-Qwen-32b            Qwen3-Embedding-0.6b
                  deepseek/deepseek-r1-distill-qwen-32b   qwen/qwen3-embedding-0.6b
Provider          DeepSeek                                Alibaba Qwen
Family            Qwen                                    Qwen
Modality          reasoning                               embedding
Context window    80,000 tok                              8,192 tok
Max output        4,096 tok                               —
Released          2025-01-22                              2025-06-18
Input price       $0.500 /1M                              $0.012 /1M
Output price      $4.88 /1M                               —
Cache read        —                                       —
Tools             —                                       —
Streaming         yes                                     —
Vision            —                                       —
JSON mode         yes                                     —
Reasoning         yes                                     —
Prompt caching    —                                       —
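The listed per-1M-token prices make request costs easy to estimate. A rough calculator as a sketch (the `estimate_cost` helper is illustrative; actual billing comes from the gateway's own usage reporting):

```python
# Per-million-token prices from the comparison table above (USD).
PRICES = {
    "deepseek/deepseek-r1-distill-qwen-32b": {"input": 0.500, "output": 4.88},
    "qwen/qwen3-embedding-0.6b": {"input": 0.012, "output": 0.0},
}

def estimate_cost(model, input_tokens, output_tokens=0):
    """Rough USD cost of one request at the listed per-1M rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a 2,000-token prompt with a 1,000-token answer on the R1 distill:
# 2000 * 0.500/1M + 1000 * 4.88/1M = 0.001 + 0.00488 = $0.00588
```

Embedding requests bill input tokens only, which is why the table shows no output price for Qwen3-Embedding-0.6b.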
Deepseek-R1-Distill-Qwen-32b
deepseek/deepseek-r1-distill-qwen-32b
Full spec →

DeepSeek-R1-Distill-Qwen-32B is a model distilled from DeepSeek-R1 based on Qwen2.5. It outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Strengths
  • Strong on math + code
  • R1 reasoning in a 32B Qwen shell
  • Open-weight
Qwen3-Embedding-0.6b
qwen/qwen3-embedding-0.6b
Full spec →

The Qwen3 Embedding series is the latest first-party model line in the Qwen family, designed specifically for text embedding and ranking tasks.

Strengths
  • Semantic similarity
  • Vector search
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Deepseek-R1-Distill-Qwen-32b
client.chat.completions.create(
    model="deepseek/deepseek-r1-distill-qwen-32b",
    messages=[{"role":"user","content":"hello"}],
)

# Try Qwen3-Embedding-0.6b — same client, same key.
# Note: it's an embedding model, so it uses the embeddings
# endpoint rather than chat completions.
client.embeddings.create(
    model="qwen/qwen3-embedding-0.6b",
    input="hello",
)
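Since the embedding model returns vectors rather than text, the usual next step is semantic similarity. A minimal pure-Python sketch (the vectors would come from `client.embeddings.create(...).data[i].embedding`; the commented call below mirrors the example above and is not executed here):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# resp = client.embeddings.create(
#     model="qwen/qwen3-embedding-0.6b",
#     input=["a fast car", "a quick automobile"],
# )
# v1 = resp.data[0].embedding
# v2 = resp.data[1].embedding
# cosine(v1, v2)  # closer to 1.0 means more similar
```

For vector search, the same function scores a query embedding against each stored document embedding; dedicated vector stores do this at scale.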
Get an AIgateway key
Add a third model

Compare with another

Qwen3-Embedding-0.6b vs Uform-Gen2-Qwen-500m
qwen/qwen3-embedding-0.6b · unum/uform-gen2-qwen-500m
Deepseek-R1-Distill-Qwen-32b vs Uform-Gen2-Qwen-500m
deepseek/deepseek-r1-distill-qwen-32b · unum/uform-gen2-qwen-500m
Qwen1.5-0.5b-Chat vs Qwen3-Embedding-0.6b
qwen/qwen1.5-0.5b-chat · qwen/qwen3-embedding-0.6b
Qwen1.5-1.8b-Chat vs Qwen3-Embedding-0.6b
qwen/qwen1.5-1.8b-chat · qwen/qwen3-embedding-0.6b
Qwen1.5-14b-Chat-Awq vs Qwen3-Embedding-0.6b
qwen/qwen1.5-14b-chat-awq · qwen/qwen3-embedding-0.6b
Qwen1.5-7b-Chat-Awq vs Qwen3-Embedding-0.6b
qwen/qwen1.5-7b-chat-awq · qwen/qwen3-embedding-0.6b
Qwen2.5-Coder-32b-Instruct vs Qwen3-Embedding-0.6b
qwen/qwen2.5-coder-32b-instruct · qwen/qwen3-embedding-0.6b
Qwen3-30b-A3b-Fp8 vs Qwen3-Embedding-0.6b
qwen/qwen3-30b-a3b-fp8 · qwen/qwen3-embedding-0.6b
Qwen3-Embedding-0.6b vs Qwq-32b
qwen/qwen3-embedding-0.6b · qwen/qwq-32b