Pricing, context window, capabilities, and release dates below are pulled from each provider's public docs. Both models are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.
| | Mistral-7B-Instruct-v0.2 (`hf/mistral/mistral-7b-instruct-v0.2`) | Voxtral Mini Transcribe Realtime (`mistral/voxtral-mini-transcribe-realtime-26-02`) |
|---|---|---|
| Provider | Hugging Face | Mistral |
| Family | Mistral | Mistral |
| Modality | text | audio-stt |
| Context window | 32,768 tok | — |
| Max output | 4,096 tok | — |
| Released | 2024-04-02 | 2026-02-01 |
| Input price | $0.05 / 1M tokens | $0.008 / min of audio |
| Output price | $0.10 / 1M tokens | — |
| Cache read | — | — |
| Tools | — | — |
| Streaming | yes | — |
| Vision | — | — |
| JSON mode | — | — |
| Reasoning | — | — |
| Prompt caching | — | — |
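As a back-of-envelope check on the prices above, here is a minimal cost sketch. The token counts and audio duration are made-up example inputs, not benchmarks:

```python
# Prices copied from the table above.
MISTRAL_IN_PER_M = 0.05    # $ per 1M input tokens
MISTRAL_OUT_PER_M = 0.10   # $ per 1M output tokens
VOXTRAL_PER_MIN = 0.008    # $ per minute of audio

def chat_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * MISTRAL_IN_PER_M + (output_tokens / 1e6) * MISTRAL_OUT_PER_M

def transcribe_cost(audio_minutes: float) -> float:
    return audio_minutes * VOXTRAL_PER_MIN

# e.g. a 2,000-token prompt with a 500-token reply, and a 10-minute recording
print(f"chat:       ${chat_cost(2_000, 500):.6f}")   # $0.000150
print(f"transcribe: ${transcribe_cost(10):.4f}")     # $0.0800
```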
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.2. Compared to Mistral-7B-v0.1, v0.2 has a 32k context window (up from 8k), rope-theta = 1e6, and no sliding-window attention.
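Those v0.2 changes are visible directly in the model's published Hugging Face config. A minimal sketch (requires `transformers` and network access to the Hub):

```python
from transformers import AutoConfig

# Pull the published config for the instruct model from the Hub
cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
print(cfg.max_position_embeddings)  # 32768 -> the 32k context window
print(cfg.rope_theta)               # 1000000.0 -> rope-theta = 1e6
print(cfg.sliding_window)           # None -> sliding-window attention disabled
```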
Voxtral Mini Transcribe Realtime (Feb 2026) is Mistral's streaming ASR (speech-to-text) model: audio goes in, transcribed text streams back in real time.
```python
# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verification.
# openai SDK: chat/embeddings/images/audio, used as a drop-in OpenAI-compatible client.
from openai import OpenAI
client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)
# Try Mistral-7B-Instruct-v0.2: a plain chat completion
resp = client.chat.completions.create(
    model="hf/mistral/mistral-7b-instruct-v0.2",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
# Try Voxtral Mini Transcribe Realtime: same client, same key.
# It's a speech-to-text model, so it takes audio through the
# OpenAI-compatible transcription endpoint, not a chat message.
# "sample.wav" is a placeholder path for your own audio file.
with open("sample.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="mistral/voxtral-mini-transcribe-realtime-26-02",
        file=audio_file,
    )
print(transcript.text)
```
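The table lists streaming support for the chat model. Here is a minimal sketch of token-by-token streaming through the same client, using the standard OpenAI SDK streaming pattern and assuming AIgateway passes it through:

```python
# Stream the chat completion token by token
stream = client.chat.completions.create(
    model="hf/mistral/mistral-7b-instruct-v0.2",
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry only role/finish metadata
        print(delta, end="", flush=True)
print()
```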