
Mistral-7b-Instruct-V0.2 vs Voxtral TTS

Pricing, context window, capabilities, and release date — pulled from each provider's public docs. Both are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key, and it's free to sign up.
                  Mistral-7b-Instruct-V0.2               Voxtral TTS
                  hf/mistral/mistral-7b-instruct-v0.2    mistral/voxtral-tts-26-03
Provider          Hugging Face                           Mistral
Family            Mistral                                Mistral
Modality          text                                   audio-tts
Context window    3,072 tok                              -
Max output        4,096 tok                              -
Released          2024-04-02                             2026-03-26
Input price       $0.050 /1M tok                         $12.00 /1K ch
Output price      $0.100 /1M tok                         -
Cache read        -                                      -
Tools             -                                      -
Streaming         yes                                    -
Vision            -                                      -
JSON mode         -                                      -
Reasoning         -                                      -
Prompt caching    -                                      -
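
The two models price in different units: the instruct model per million tokens, the TTS model per thousand characters of input text. A back-of-the-envelope sketch of what that means in practice, using the prices from the table above; the workload sizes below are made-up illustrative numbers:

```python
# Prices from the comparison table above.
LLM_INPUT_PER_1M_TOK = 0.050   # $ per 1M input tokens
LLM_OUTPUT_PER_1M_TOK = 0.100  # $ per 1M output tokens
TTS_PER_1K_CHARS = 12.00       # $ per 1K input characters

def llm_cost(input_tok: int, output_tok: int) -> float:
    """Dollar cost of one chat request at the listed per-token prices."""
    return (input_tok / 1e6) * LLM_INPUT_PER_1M_TOK \
         + (output_tok / 1e6) * LLM_OUTPUT_PER_1M_TOK

def tts_cost(chars: int) -> float:
    """Dollar cost of synthesizing `chars` characters of text."""
    return (chars / 1e3) * TTS_PER_1K_CHARS

# Hypothetical workload: 2,000 input + 500 output tokens, or 500 chars spoken.
print(round(llm_cost(2_000, 500), 6))  # 0.00015
print(round(tts_cost(500), 2))         # 6.0
```

The units matter more than the headline numbers: a short paragraph costs fractions of a cent through the instruct model but several dollars when synthesized to speech.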
Mistral-7b-Instruct-V0.2
hf/mistral/mistral-7b-instruct-v0.2
Full spec →

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2. Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1: 32k context window (vs 8k context in v0.1), rope-theta = 1e6, and no Sliding-Window Attention.

Strengths
  • General-purpose chat
  • Open-weight
Voxtral TTS
mistral/voxtral-tts-26-03
Full spec →

Voxtral TTS (Mar 2026): Mistral's multilingual speech-synthesis model.

Strengths
  • Natural speech synthesis
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Mistral-7b-Instruct-V0.2
client.chat.completions.create(
    model="hf/mistral/mistral-7b-instruct-v0.2",
    messages=[{"role": "user", "content": "hello"}],
)

# Try Voxtral TTS — same client, same key
client.chat.completions.create(
    model="mistral/voxtral-tts-26-03",
    messages=[{"role": "user", "content": "hello"}],
)
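
The race view above streams both models and tallies tokens and latency as deltas arrive. A minimal local sketch of that loop, assuming the OpenAI-compatible `client` configured in the previous snippet; `stream_reply` is a hypothetical helper for illustration, not part of either SDK:

```python
import time

def stream_reply(client, model: str, prompt: str) -> tuple[str, float]:
    """Stream one chat completion; return the full text and elapsed seconds.

    `client` is any OpenAI-compatible client, e.g. the AIgateway one above.
    """
    start = time.monotonic()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # deltas arrive incrementally instead of one final body
    )
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk may carry no content
            parts.append(delta)
    return "".join(parts), time.monotonic() - start
```

Calling it twice, once per model string, gives the same side-by-side comparison the live view renders.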

Compare with another

Mistral-7b-Instruct-V0.2 vs Hermes-2-Pro-Mistral-7b
hf/mistral/mistral-7b-instruct-v0.2 · hf/nousresearch/hermes-2-pro-mistral-7b
Mistral-7b-Instruct-V0.2 vs Mistral-7b-Instruct-V0.1-Awq
hf/mistral/mistral-7b-instruct-v0.2 · hf/thebloke/mistral-7b-instruct-v0.1-awq
Mistral-7b-Instruct-V0.2 vs Openhermes-2.5-Mistral-7b-Awq
hf/mistral/mistral-7b-instruct-v0.2 · hf/thebloke/openhermes-2.5-mistral-7b-awq
Mistral-7b-Instruct-V0.2 vs Voxtral Mini Transcribe Realtime
hf/mistral/mistral-7b-instruct-v0.2 · mistral/voxtral-mini-transcribe-realtime-26-02
Mistral-7b-Instruct-V0.2 vs Voxtral Mini Transcribe
hf/mistral/mistral-7b-instruct-v0.2 · mistral/voxtral-mini-transcribe-26-02
Mistral-7b-Instruct-V0.2 vs Mistral Small 4
hf/mistral/mistral-7b-instruct-v0.2 · mistral/mistral-small-4-0-26-03
Hermes-2-Pro-Mistral-7b vs Voxtral TTS
hf/nousresearch/hermes-2-pro-mistral-7b · mistral/voxtral-tts-26-03
Mistral-7b-Instruct-V0.1-Awq vs Voxtral TTS
hf/thebloke/mistral-7b-instruct-v0.1-awq · mistral/voxtral-tts-26-03
Openhermes-2.5-Mistral-7b-Awq vs Voxtral TTS
hf/thebloke/openhermes-2.5-mistral-7b-awq · mistral/voxtral-tts-26-03