
Hermes-2-Pro-Mistral-7b vs Mistral-7b-Instruct-V0.1-Awq

Pricing, context window, capabilities, and release dates, pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switch between them by changing the model string.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

                   Hermes-2-Pro-Mistral-7b                     Mistral-7b-Instruct-V0.1-Awq
                   hf/nousresearch/hermes-2-pro-mistral-7b     hf/thebloke/mistral-7b-instruct-v0.1-awq

Provider           Hugging Face                                Hugging Face
Family             Mistral                                     Mistral
Modality           text                                        text
Context window     24,000 tok                                  4,096 tok
Max output         4,096 tok                                   4,096 tok
Released           2024-04-01                                  2023-09-27
Input price        $0.050 /1M                                  $0.050 /1M
Output price       $0.100 /1M                                  $0.100 /1M
Cache read         –                                           –
Tools              yes                                         –
Streaming          yes                                         yes
Vision             –                                           –
JSON mode          yes                                         –
Reasoning          –                                           –
Prompt caching     –                                           –
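Since both models share the same per-token prices, the bill for a given request is identical on either side. The sketch below turns the table's prices into a quick cost estimate; the `PRICES_PER_1M` dict and `estimate_cost` helper are illustrative, not part of the AIgateway SDK:

```python
# Illustrative helper (not part of any SDK): estimate the cost of one
# request from the per-million-token prices listed in the table above.

PRICES_PER_1M = {  # model -> (input $, output $) per 1M tokens
    "hf/nousresearch/hermes-2-pro-mistral-7b": (0.050, 0.100),
    "hf/thebloke/mistral-7b-instruct-v0.1-awq": (0.050, 0.100),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single request."""
    in_price, out_price = PRICES_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A 1,000-token prompt with a 500-token reply costs the same on either model:
cost = estimate_cost("hf/nousresearch/hermes-2-pro-mistral-7b", 1_000, 500)
print(f"${cost:.6f}")  # $0.000100
```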
Hermes-2-Pro-Mistral-7b
hf/nousresearch/hermes-2-pro-mistral-7b
Full spec →

Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.

Strengths
  • Function / tool calling
  • Structured output
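Because function calling is this model's headline strength, a request would typically attach a tool definition in the standard OpenAI chat-completions `tools` schema. The sketch below assumes the gateway forwards `tools` unchanged; the `get_weather` tool and the `ask_weather` helper are hypothetical, shown only to illustrate the shape of the payload:

```python
# Hypothetical tool definition in the standard OpenAI chat-completions
# "tools" schema. get_weather is illustrative, not a real API.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def ask_weather(client):
    """Send a tool-enabled request via an OpenAI-compatible client
    (e.g. OpenAI(base_url="https://api.aigateway.sh/v1", api_key=...))
    and return the tool-call arguments as a JSON string."""
    resp = client.chat.completions.create(
        model="hf/nousresearch/hermes-2-pro-mistral-7b",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=[weather_tool],
    )
    # If the model chose to call the tool, arguments arrive as a JSON string.
    return resp.choices[0].message.tool_calls[0].function.arguments
```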
Mistral-7b-Instruct-V0.1-Awq
hf/thebloke/mistral-7b-instruct-v0.1-awq
Full spec →

Mistral 7B Instruct v0.1 AWQ is an efficient, accurate, and blazing-fast low-bit weight-quantized Mistral variant.

Strengths
  • General-purpose chat
  • Long context
  • Tool use
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Hermes-2-Pro-Mistral-7b
client.chat.completions.create(
    model="hf/nousresearch/hermes-2-pro-mistral-7b",
    messages=[{"role":"user","content":"hello"}],
)

# Try Mistral-7b-Instruct-V0.1-Awq — same client, same key
client.chat.completions.create(
    model="hf/thebloke/mistral-7b-instruct-v0.1-awq",
    messages=[{"role":"user","content":"hello"}],
)
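To reproduce the side-by-side race locally, you can stream both models on separate threads and record time-to-first-token. The helpers below are an illustrative sketch, not part of any SDK: `stream_stats` and `race` work on any iterator of text chunks, so the commented usage shows how they would plug into the SDK's `stream=True` chunks under that assumption:

```python
import threading
import time

def stream_stats(chunks, extract=lambda c: c or ""):
    """Consume a stream of chunks; return (first_token_latency_s, full_text)."""
    start = time.perf_counter()
    first = None
    parts = []
    for chunk in chunks:
        text = extract(chunk)
        if text:
            if first is None:
                first = time.perf_counter() - start
            parts.append(text)
    return first, "".join(parts)

def race(streams):
    """Consume several named streams on parallel threads; return stats per name."""
    results = {}

    def worker(name, it):
        results[name] = stream_stats(it)

    threads = [threading.Thread(target=worker, args=(n, s)) for n, s in streams.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Assumed wiring against the OpenAI SDK's streaming chunks (delta.content
# may be None, which the default extract handles):
# streams = {
#     m: client.chat.completions.create(model=m, messages=msgs, stream=True)
#     for m in ("hf/nousresearch/hermes-2-pro-mistral-7b",
#               "hf/thebloke/mistral-7b-instruct-v0.1-awq")
# }
# stats = race({m: (c.choices[0].delta.content for c in s)
#               for m, s in streams.items()})
```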

Compare with another

Mistral-7b-Instruct-V0.2 vs Hermes-2-Pro-Mistral-7b
hf/mistral/mistral-7b-instruct-v0.2 · hf/nousresearch/hermes-2-pro-mistral-7b
Mistral-7b-Instruct-V0.2 vs Mistral-7b-Instruct-V0.1-Awq
hf/mistral/mistral-7b-instruct-v0.2 · hf/thebloke/mistral-7b-instruct-v0.1-awq
Hermes-2-Pro-Mistral-7b vs Openhermes-2.5-Mistral-7b-Awq
hf/nousresearch/hermes-2-pro-mistral-7b · hf/thebloke/openhermes-2.5-mistral-7b-awq
Hermes-2-Pro-Mistral-7b vs Voxtral Mini Transcribe Realtime
hf/nousresearch/hermes-2-pro-mistral-7b · mistral/voxtral-mini-transcribe-realtime-26-02
Hermes-2-Pro-Mistral-7b vs Voxtral Mini Transcribe
hf/nousresearch/hermes-2-pro-mistral-7b · mistral/voxtral-mini-transcribe-26-02
Hermes-2-Pro-Mistral-7b vs Voxtral TTS
hf/nousresearch/hermes-2-pro-mistral-7b · mistral/voxtral-tts-26-03
Hermes-2-Pro-Mistral-7b vs Mistral Small 4
hf/nousresearch/hermes-2-pro-mistral-7b · mistral/mistral-small-4-0-26-03
Mistral-7b-Instruct-V0.1-Awq vs Openhermes-2.5-Mistral-7b-Awq
hf/thebloke/mistral-7b-instruct-v0.1-awq · hf/thebloke/openhermes-2.5-mistral-7b-awq
Mistral-7b-Instruct-V0.1-Awq vs Voxtral Mini Transcribe Realtime
hf/thebloke/mistral-7b-instruct-v0.1-awq · mistral/voxtral-mini-transcribe-realtime-26-02