
Qwen1.5-0.5b-Chat vs Qwen1.5-1.8b-Chat

Pricing, context window, capabilities, and release dates, pulled from each provider's public docs. Both models are served from the same AIgateway OpenAI-compatible endpoint; swap the model string to switch between them.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key; it's free to sign up.
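The "race" above can be sketched in a few lines: send the same prompt to both models concurrently and collect each model's streamed chunks. In this sketch the streaming call is injected as `stream_fn` so the concurrency logic stands alone; in practice `stream_fn` would wrap a `client.chat.completions.create(..., stream=True)` call against the AIgateway endpoint (the function name and structure here are illustrative, not part of any SDK).

```python
from concurrent.futures import ThreadPoolExecutor

MODELS = ["qwen/qwen1.5-0.5b-chat", "qwen/qwen1.5-1.8b-chat"]

def race(prompt, stream_fn, models=MODELS):
    """Run stream_fn(model, prompt) for each model in parallel.

    stream_fn is expected to yield text chunks, e.g. the delta.content
    pieces from a streamed chat completion.
    """
    def collect(model):
        # Join the streamed chunks into one string per model.
        return model, "".join(stream_fn(model, prompt))

    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        return dict(pool.map(collect, models))
```

With a real client, `stream_fn` would iterate the stream and yield `chunk.choices[0].delta.content` for each non-empty chunk.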
                  Qwen1.5-0.5b-Chat         Qwen1.5-1.8b-Chat
                  qwen/qwen1.5-0.5b-chat    qwen/qwen1.5-1.8b-chat
Provider          Alibaba Qwen              Alibaba Qwen
Family            Qwen                      Qwen
Modality          text                      text
Context window    4,096 tok                 4,096 tok
Max output        4,096 tok                 4,096 tok
Released          2024-02-05                2024-02-05
Input price       $0.010 /1M                $0.020 /1M
Output price      $0.020 /1M                $0.040 /1M
Cache read
Tools
Streaming         yes                       yes
Vision
JSON mode
Reasoning
Prompt caching
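The listed prices make per-request cost a one-line calculation: tokens times the per-1M rate. A minimal sketch, using only the input/output prices from the table above (the `PRICES` dict and `request_cost` helper are illustrative, not an AIgateway API):

```python
# USD per 1M tokens, as listed in the comparison table.
PRICES = {  # model id: (input $/1M tok, output $/1M tok)
    "qwen/qwen1.5-0.5b-chat": (0.010, 0.020),
    "qwen/qwen1.5-1.8b-chat": (0.020, 0.040),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000
```

For example, 1,000 input plus 500 output tokens on the 1.8b model comes to (1000 × 0.020 + 500 × 0.040) / 1M = $0.00004, twice what the 0.5b model would charge for the same traffic.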
Qwen1.5-0.5b-Chat
qwen/qwen1.5-0.5b-chat
Full spec →

Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.

Strengths
  • General-purpose chat
  • Long context
  • Tool use
Qwen1.5-1.8b-Chat
qwen/qwen1.5-1.8b-chat
Full spec →

Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.

Strengths
  • General-purpose chat
  • Long context
  • Tool use
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Qwen1.5-0.5b-Chat
resp = client.chat.completions.create(
    model="qwen/qwen1.5-0.5b-chat",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)

# Try Qwen1.5-1.8b-Chat — same client, same key
resp = client.chat.completions.create(
    model="qwen/qwen1.5-1.8b-chat",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
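Since only the model string differs, the two calls above factor naturally into one helper that fans a prompt out to both models through the same client. A minimal sketch (the `compare` helper is illustrative; it takes the client as a parameter rather than assuming any particular key setup):

```python
def compare(client, prompt,
            models=("qwen/qwen1.5-0.5b-chat", "qwen/qwen1.5-1.8b-chat")):
    """Send the same prompt to each model and map model id -> reply text."""
    return {
        m: client.chat.completions.create(
            model=m,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        for m in models
    }
```

Adding a third model is then just one more entry in `models`.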

Compare with another

Qwen1.5-0.5b-Chat vs Uform-Gen2-Qwen-500m
qwen/qwen1.5-0.5b-chat · unum/uform-gen2-qwen-500m
Qwen1.5-1.8b-Chat vs Uform-Gen2-Qwen-500m
qwen/qwen1.5-1.8b-chat · unum/uform-gen2-qwen-500m
Qwen1.5-0.5b-Chat vs Qwen3-Embedding-0.6b
qwen/qwen1.5-0.5b-chat · qwen/qwen3-embedding-0.6b
Qwen1.5-1.8b-Chat vs Qwen3-Embedding-0.6b
qwen/qwen1.5-1.8b-chat · qwen/qwen3-embedding-0.6b
Deepseek-R1-Distill-Qwen-32b vs Qwen1.5-0.5b-Chat
deepseek/deepseek-r1-distill-qwen-32b · qwen/qwen1.5-0.5b-chat
Deepseek-R1-Distill-Qwen-32b vs Qwen1.5-1.8b-Chat
deepseek/deepseek-r1-distill-qwen-32b · qwen/qwen1.5-1.8b-chat
Qwen1.5-0.5b-Chat vs Qwen1.5-14b-Chat-Awq
qwen/qwen1.5-0.5b-chat · qwen/qwen1.5-14b-chat-awq
Qwen1.5-0.5b-Chat vs Qwen1.5-7b-Chat-Awq
qwen/qwen1.5-0.5b-chat · qwen/qwen1.5-7b-chat-awq
Qwen2.5-Coder-32b-Instruct vs Qwen1.5-0.5b-Chat
qwen/qwen2.5-coder-32b-instruct · qwen/qwen1.5-0.5b-chat