Pricing, context window, capabilities, and release dates below are pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; swap the model string to switch between them. Both can stream in parallel through your own AIgateway key, with token counts, latency, and cost updating as responses arrive.
| | Qwen 3.5 397B A17B (`alibaba/qwen3.5-397b-a17b`) | Qwen1.5 14B Chat AWQ (`qwen/qwen1.5-14b-chat-awq`) |
|---|---|---|
| Provider | Alibaba | Alibaba Qwen |
| Family | Qwen | Qwen |
| Modality | text | text |
| Context window | 262,144 tok | 4,096 tok |
| Max output | 4,096 tok | 4,096 tok |
| Released | 2026-04-15 | 2024-02-05 |
| Input price | $0.600 /1M | $0.120 /1M |
| Output price | $3.60 /1M | $0.240 /1M |
| Cache read | — | — |
| Tools | yes | — |
| Streaming | yes | yes |
| Vision | — | — |
| JSON mode | yes | — |
| Reasoning | yes | — |
| Prompt caching | — | — |
Alibaba's Qwen 3.5 is a 397B-parameter mixture-of-experts model with 17B active parameters, offering strong reasoning capabilities with efficient inference.
Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud. AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization.
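To make the pricing rows concrete, here is a quick back-of-the-envelope sketch. The prices come from the table above; the helper function itself is our own illustration, not part of any SDK:

```python
def estimate_cost_usd(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Estimate request cost from per-million-token prices."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# Per-million-token prices from the comparison table above.
QWEN35 = (0.60, 3.60)   # alibaba/qwen3.5-397b-a17b
QWEN15 = (0.12, 0.24)   # qwen/qwen1.5-14b-chat-awq

# Example: a 2,000-token prompt with a 500-token reply.
print(round(estimate_cost_usd(2_000, 500, *QWEN35), 6))  # 0.003
print(round(estimate_cost_usd(2_000, 500, *QWEN15), 6))  # 0.00036
```

At this request size Qwen 3.5 costs roughly 8x more per call, which is the trade the table is laying out: a much larger context window and tool/JSON/reasoning support versus a cheaper, smaller quantized model.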
```python
# pip install aigateway-py openai
# aigateway-py adds sub-accounts, evals, replays, jobs, and webhook verification;
# the openai SDK covers chat, embeddings, images, and audio as a drop-in client.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",  # your AIgateway key
)

# Try Qwen 3.5 397B A17B
response = client.chat.completions.create(
    model="alibaba/qwen3.5-397b-a17b",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)

# Try Qwen1.5 14B Chat AWQ: same client, same key, different model string
response = client.chat.completions.create(
    model="qwen/qwen1.5-14b-chat-awq",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```
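The parallel streaming comparison described above can be sketched with a small helper that consumes a chat stream and records per-chunk latency. `stream=True` is the standard OpenAI SDK flag; the timing helper below is our own illustration and works on anything shaped like the SDK's stream chunks:

```python
import time

def consume_stream(chunks):
    """Collect streamed text and record the arrival gap of each chunk.

    `chunks` is any iterable of objects shaped like OpenAI chat-completion
    stream chunks (i.e. chunk.choices[0].delta.content). Returns the full
    text and a list of inter-chunk latencies in seconds.
    """
    text_parts, latencies = [], []
    last = time.monotonic()
    for chunk in chunks:
        now = time.monotonic()
        latencies.append(now - last)
        last = now
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta is often None
            text_parts.append(delta)
    return "".join(text_parts), latencies

# Against a live endpoint this would look like:
#   stream = client.chat.completions.create(
#       model="alibaba/qwen3.5-397b-a17b",
#       messages=[{"role": "user", "content": "hello"}],
#       stream=True,
#   )
#   text, latencies = consume_stream(stream)
```

Run the same helper once per model string and you have the per-model token text, time-to-first-chunk (`latencies[0]`), and inter-token gaps that the side-by-side view reports.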