Pricing, context window, capabilities, and release date — pulled from each provider's public docs. Both are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.
Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.
| | Aura-2-EN (`deepgram/aura-2-en`) | Qwen2.5-Coder-32B-Instruct (`qwen/qwen2.5-coder-32b-instruct`) |
|---|---|---|
| Provider | Deepgram | Alibaba Qwen |
| Family | Aura | Qwen |
| Modality | audio-tts | text |
| Context window | — | 32,768 tok |
| Max output | — | 4,096 tok |
| Released | 2025-10-09 | 2025-02-27 |
| Input price | $0.030 /1K chars | $0.66 /1M tok |
| Output price | — | $1.00 /1M tok |
| Cache read | — | — |
| Tools | — | yes |
| Streaming | yes | yes |
| Vision | — | — |
| JSON mode | — | yes |
| Reasoning | — | — |
| Prompt caching | — | — |
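The table's per-token prices make back-of-the-envelope cost estimates easy. A minimal sketch for the Qwen side (Aura-2 is billed per 1K characters of TTS input, so a token-based estimate does not apply to it):

```python
# Prices from the table above (USD). Qwen2.5-Coder only; Aura-2 is TTS
# and billed at $0.030 per 1K input characters instead.
QWEN_INPUT_PER_M = 0.66   # $ per 1M input tokens
QWEN_OUTPUT_PER_M = 1.00  # $ per 1M output tokens

def qwen_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one qwen/qwen2.5-coder-32b-instruct call."""
    return (prompt_tokens / 1_000_000 * QWEN_INPUT_PER_M
            + completion_tokens / 1_000_000 * QWEN_OUTPUT_PER_M)

print(f"${qwen_cost(10_000, 2_000):.4f}")  # → $0.0086
```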
Aura-2 is a context-aware text-to-speech (TTS) model that applies natural pacing, expressiveness, and fillers based on the context of the provided text. The quality of your text input directly impacts the naturalness of the audio output.
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). It currently ships in six mainstream sizes (0.5B, 1.5B, 3B, 7B, 14B, and 32B parameters) to suit different deployment budgets, and improves on CodeQwen1.5 in code generation, code reasoning, and code fixing.
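The table lists JSON mode for Qwen2.5-Coder. A hedged sketch of a structured-output request, assuming the gateway forwards OpenAI's `response_format` parameter unchanged (the commented call uses the client constructed below):

```python
import json

# Hypothetical structured-output request for Qwen2.5-Coder; Aura-2 (TTS)
# has no JSON mode. response_format follows the OpenAI chat API shape.
request = {
    "model": "qwen/qwen2.5-coder-32b-instruct",
    "messages": [
        {"role": "user", "content": 'Reply with a JSON object {"ok": true}.'}
    ],
    "response_format": {"type": "json_object"},
}
print(json.dumps(request, indent=2))

# With the client shown below (requires a live key, so not run here):
# resp = client.chat.completions.create(**request)
# data = json.loads(resp.choices[0].message.content)  # parseable JSON
```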
# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI
client = OpenAI(
base_url="https://api.aigateway.sh/v1",
api_key="sk-aig-...",
)
# Try Aura-2-EN: a TTS model, so use the speech endpoint rather than chat
# (assumes the gateway maps OpenAI's /v1/audio/speech route; voice naming may vary)
speech = client.audio.speech.create(
    model="deepgram/aura-2-en",
    voice="aura-2-en",
    input="hello",
)
# Try Qwen2.5-Coder-32b-Instruct — same client, same key
resp = client.chat.completions.create(
    model="qwen/qwen2.5-coder-32b-instruct",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)