Pricing, context window, capabilities, and release date, pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switch between them by changing the model string.
| | Qwen3-Embedding-0.6B (`qwen/qwen3-embedding-0.6b`) | QwQ-32B (`qwen/qwq-32b`) |
|---|---|---|
| Provider | Alibaba Qwen | Alibaba Qwen |
| Family | Qwen | Qwen |
| Modality | embedding | reasoning |
| Context window | 8,192 tok | 24,000 tok |
| Max output | — | 4,096 tok |
| Released | 2025-06-18 | 2025-03-05 |
| Input price | $0.012 /1M | $0.200 /1M |
| Output price | — | $0.400 /1M |
| Cache read | — | — |
| Tools | — | — |
| Streaming | — | yes |
| Vision | — | — |
| JSON mode | — | yes |
| Reasoning | — | yes |
| Prompt caching | — | — |
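The per-token prices in the table translate directly into per-request cost. A minimal sketch of that arithmetic, using only the table's numbers (`estimate_cost` is a hypothetical helper, not part of any SDK; token counts are illustrative):

```python
# Per-million-token USD prices from the comparison table above.
PRICES = {
    "qwen/qwen3-embedding-0.6b": {"input": 0.012, "output": 0.0},
    "qwen/qwq-32b": {"input": 0.200, "output": 0.400},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int = 0) -> float:
    """Estimated USD cost of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A QwQ-32B call with a 10k-token prompt and a 2k-token completion:
qwq_cost = estimate_cost("qwen/qwq-32b", 10_000, 2_000)  # ≈ $0.0028
```

Embedding calls only ever pay the input rate, which is why the output column for Qwen3-Embedding-0.6B is blank.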
The Qwen3 Embedding series is the latest model family in the Qwen lineup, designed specifically for text embedding and ranking tasks.
QwQ is the reasoning model series of the Qwen family. Unlike conventional instruction-tuned models, QwQ thinks before answering, which yields significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model in the series, competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.
# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)
# Try Qwen3-Embedding-0.6B — it's an embedding model, so use the
# embeddings endpoint rather than chat completions
client.embeddings.create(
    model="qwen/qwen3-embedding-0.6b",
    input="hello",
)
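The embeddings endpoint returns vectors, which are typically compared by cosine similarity. A minimal, offline sketch (`cosine_similarity` is a hypothetical helper written here for illustration, not part of the openai SDK):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# With live responses, the vectors come from the embeddings call, e.g.:
# v = client.embeddings.create(
#     model="qwen/qwen3-embedding-0.6b", input="hello"
# ).data[0].embedding
```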
# Try QwQ-32B — same client, same key
client.chat.completions.create(
    model="qwen/qwq-32b",
    messages=[{"role": "user", "content": "hello"}],
)
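The table lists streaming support for QwQ-32B. A sketch of consuming a stream with the standard `stream=True` flag of the OpenAI SDK; the network call is shown commented out, and `collect_stream` is a hypothetical helper that joins the incremental chunks:

```python
def collect_stream(chunks) -> str:
    """Join the incremental delta.content pieces of a chat completion stream."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # some chunks (e.g. role/finish markers) carry no text
            parts.append(delta.content)
    return "".join(parts)

# With a live client:
# stream = client.chat.completions.create(
#     model="qwen/qwq-32b",
#     messages=[{"role": "user", "content": "hello"}],
#     stream=True,
# )
# print(collect_stream(stream))
```

For a reasoning model like QwQ-32B, streaming matters more than usual: the thinking phase can produce many tokens before the final answer, so incremental display keeps the UI responsive.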