
Llama-3.1-8b-Instruct-Fast vs Llama-3.3-70b-Instruct-Fp8-Fast

Pricing, context window, capabilities, and release date — pulled from each provider's public docs. Both are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key; signing up is free.
 Llama-3.1-8b-Instruct-Fast
meta/llama-3.1-8b-instruct-fast
Llama-3.3-70b-Instruct-Fp8-Fast
meta/llama-3.3-70b-instruct-fp8-fast
                 Llama-3.1-8b-Instruct-Fast   Llama-3.3-70b-Instruct-Fp8-Fast
Provider         Meta                         Meta
Family           Llama 3                      Llama 3
Modality         text                         text
Context window   131,072 tok                  24,000 tok
Max output       4,096 tok                    8,192 tok
Released         2024-07-23                   2024-12-06
Input price      $0.050 /1M                   $0.290 /1M
Output price     $0.100 /1M                   $2.25 /1M
Cache read       —                            —
Tools            yes                          —
Streaming        yes                          yes
Vision           —                            —
JSON mode        yes                          —
Reasoning        —                            —
Prompt caching   —                            —
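The per-token prices above compound quickly on output-heavy workloads. A minimal sketch of estimating per-request cost from those prices (the helper name and price dictionary are illustrative, copied from the table):

```python
# Per-1M-token prices from the comparison table (USD).
PRICES = {
    "meta/llama-3.1-8b-instruct-fast": {"input": 0.05, "output": 0.10},
    "meta/llama-3.3-70b-instruct-fp8-fast": {"input": 0.29, "output": 2.25},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request in USD."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

At 10,000 input and 1,000 output tokens, the 8B model works out to about $0.0006 per request versus about $0.00515 for the 70B FP8 variant, with most of the gap coming from the output price.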
Llama-3.1-8b-Instruct-Fast
meta/llama-3.1-8b-instruct-fast
Full spec →

[Fast version] The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models. The Llama 3.1 instruction-tuned, text-only models are optimized for multilingual dialogue and outperform many available open-source and closed chat models on common industry benchmarks.

Strengths
  • General-purpose chat
  • Long context
  • Tool use
Llama-3.3-70b-Instruct-Fp8-Fast
meta/llama-3.3-70b-instruct-fp8-fast
Full spec →

Llama 3.3 70B quantized to FP8 precision and optimized for faster inference.

Strengths
  • Strong general-purpose open model
  • FP8-fast variant
  • Open-weight license
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Llama-3.1-8b-Instruct-Fast
client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct-fast",
    messages=[{"role": "user", "content": "hello"}],
)

# Try Llama-3.3-70b-Instruct-Fp8-Fast — same client, same key
client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct-fp8-fast",
    messages=[{"role": "user", "content": "hello"}],
)
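The live race above comes down to timing the stream. A minimal sketch, assuming the OpenAI SDK's `stream=True` chat interface against the same AIgateway base URL (the helper name and time-to-first-token bookkeeping are illustrative, not part of the gateway API):

```python
import time

def stream_with_ttft(client, model, prompt):
    """Stream a chat completion; return (full text, seconds to first token)."""
    start = time.monotonic()
    ttft = None
    parts = []
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        # Some providers emit keep-alive chunks with no choices.
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta.content
        if delta:
            if ttft is None:
                ttft = time.monotonic() - start
            parts.append(delta)
    return "".join(parts), ttft

if __name__ == "__main__":
    from openai import OpenAI  # same client setup as the snippet above

    client = OpenAI(base_url="https://api.aigateway.sh/v1", api_key="sk-aig-...")
    for model in (
        "meta/llama-3.1-8b-instruct-fast",
        "meta/llama-3.3-70b-instruct-fp8-fast",
    ):
        text, ttft = stream_with_ttft(client, model, "hello")
        print(f"{model}: first token after {ttft:.2f}s")
```

Swapping the model string is still the only change between the two runs; the timing loop is identical for both.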

Compare with another

Llama-3-8b-Instruct vs Llama-3.1-8b-Instruct-Fast
meta/llama-3-8b-instruct · meta/llama-3.1-8b-instruct-fast
Llama-3-8b-Instruct vs Llama-3.3-70b-Instruct-Fp8-Fast
meta/llama-3-8b-instruct · meta/llama-3.3-70b-instruct-fp8-fast
Llama-3-8b-Instruct-Awq vs Llama-3.1-8b-Instruct-Fast
meta/llama-3-8b-instruct-awq · meta/llama-3.1-8b-instruct-fast
Llama-3-8b-Instruct-Awq vs Llama-3.3-70b-Instruct-Fp8-Fast
meta/llama-3-8b-instruct-awq · meta/llama-3.3-70b-instruct-fp8-fast
Llama-3.1-70b-Instruct vs Llama-3.1-8b-Instruct-Fast
meta/llama-3.1-70b-instruct · meta/llama-3.1-8b-instruct-fast
Llama-3.1-70b-Instruct vs Llama-3.3-70b-Instruct-Fp8-Fast
meta/llama-3.1-70b-instruct · meta/llama-3.3-70b-instruct-fp8-fast
Llama-3.1-8b-Instruct vs Llama-3.1-8b-Instruct-Fast
meta/llama-3.1-8b-instruct · meta/llama-3.1-8b-instruct-fast
Llama-3.1-8b-Instruct vs Llama-3.3-70b-Instruct-Fp8-Fast
meta/llama-3.1-8b-instruct · meta/llama-3.3-70b-instruct-fp8-fast
Llama-3.1-8b-Instruct-Awq vs Llama-3.1-8b-Instruct-Fast
meta/llama-3.1-8b-instruct-awq · meta/llama-3.1-8b-instruct-fast