Pricing, context window, capabilities, and release date below are pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switching between them is a one-line change to the model string.
| | Whisper (`openai/whisper`) | Whisper-Tiny-EN (`openai/whisper-tiny-en`) |
|---|---|---|
| Provider | OpenAI | OpenAI |
| Family | Whisper | Whisper |
| Modality | Audio (speech-to-text) | Audio (speech-to-text) |
| Context window | — | — |
| Max output | — | — |
| Released | 2023-09-25 | 2024-04-22 |
| Input price | $0.0005/min | $0.0005/min |
| Output price | — | — |
| Cache read | — | — |
| Tools | — | — |
| Streaming | yes | yes |
| Vision | — | — |
| JSON mode | — | — |
| Reasoning | — | — |
| Prompt caching | — | — |
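
At the listed rate, transcription cost scales linearly with audio length on either model: a 60-minute recording, for example, costs 60 × $0.0005 = $0.03.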
**Whisper** is a general-purpose speech recognition model. Trained on 680k hours of labelled, diverse audio, it is a multitask model that can perform multilingual speech recognition, speech translation, and language identification, and it generalizes well to many datasets and domains without fine-tuning (a translation sketch follows the quickstart below).

**Whisper-Tiny-EN** is the English-only version of the Whisper Tiny checkpoint, trained on the task of speech recognition only.
```python
# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Both models are speech-to-text, so call the audio transcriptions
# endpoint rather than chat completions. "sample.wav" is a placeholder;
# use any audio file you have on hand.

# Try Whisper
with open("sample.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="openai/whisper",
        file=audio,
    )
print(transcript.text)

# Try Whisper-Tiny-EN — same client, same key
with open("sample.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="openai/whisper-tiny-en",
        file=audio,
    )
print(transcript.text)
```
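
Whisper's other multitask abilities map onto the same SDK surface. A minimal sketch, assuming AIgateway also proxies the OpenAI audio translations route and verbose transcription responses; `french.mp3` is a placeholder input file:

```python
# Speech translation: non-English speech in, English text out.
# Assumes the gateway proxies the /audio/translations route.
with open("french.mp3", "rb") as audio:
    translation = client.audio.translations.create(
        model="openai/whisper",  # multilingual model only, not the EN-only tiny
        file=audio,
    )
print(translation.text)  # English rendering of the speech

# Language identification: request a verbose response, which includes
# the detected source language alongside the transcript.
with open("french.mp3", "rb") as audio:
    verbose = client.audio.transcriptions.create(
        model="openai/whisper",
        file=audio,
        response_format="verbose_json",
    )
print(verbose.language)
```

Note that Whisper-Tiny-EN is English-only, so translation and language identification apply to `openai/whisper` alone.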