Pricing, context window, capabilities, and release date — pulled from each provider's public docs. Both are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.
Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.
| | Llama-3.1-8b-Instruct-Fast<br>`meta/llama-3.1-8b-instruct-fast` | Llama-3.3-70b-Instruct-Fp8-Fast<br>`meta/llama-3.3-70b-instruct-fp8-fast` |
|---|---|---|
| Provider | Meta | Meta |
| Family | Llama 3 | Llama 3 |
| Modality | text | text |
| Context window | 131,072 tok | 24,000 tok |
| Max output | 4,096 tok | 8,192 tok |
| Released | 2024-07-23 | 2024-12-06 |
| Input price | $0.05 / 1M | $0.29 / 1M |
| Output price | $0.10 / 1M | $2.25 / 1M |
| Cache read | — | — |
| Tools | — | yes |
| Streaming | yes | yes |
| Vision | — | — |
| JSON mode | — | yes |
| Reasoning | — | — |
| Prompt caching | — | — |
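At these rates, per-request cost is straightforward arithmetic: token counts divided by one million, multiplied by the per-1M price. A minimal sketch using the prices from the table above (`request_cost` is an illustrative helper, not part of any SDK):

```python
# Per-1M-token prices, taken from the comparison table above.
PRICES = {
    "meta/llama-3.1-8b-instruct-fast": {"in": 0.05, "out": 0.10},
    "meta/llama-3.3-70b-instruct-fp8-fast": {"in": 0.29, "out": 2.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["in"] + output_tokens * p["out"]) / 1_000_000

# 10k prompt / 1k completion tokens on the 70B model
cost = request_cost("meta/llama-3.3-70b-instruct-fp8-fast", 10_000, 1_000)
```

Note how output tokens dominate on the 70B model: at $2.25 per million, a long completion costs far more than the prompt that produced it.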
[Fast version] The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models. The Llama 3.1 instruction-tuned, text-only models are optimized for multilingual dialogue use cases and outperform many available open-source and closed chat models on common industry benchmarks.
Llama 3.3 70B quantized to FP8 precision and optimized for faster inference.
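Per the table, only the 70B model advertises JSON mode. A small sketch of how you might wrap a JSON-mode request, assuming the gateway forwards OpenAI's `response_format` parameter unchanged (`json_chat` is a hypothetical helper; `client` is an OpenAI-compatible client like the one constructed in the snippet below):

```python
import json

def json_chat(client, model: str, system: str, user: str) -> dict:
    """Hypothetical helper: request a JSON-mode completion and parse it.

    Assumes the gateway passes OpenAI's response_format parameter through;
    per the table above, only the 70B model advertises JSON mode.
    """
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# Usage (requires a live client and key):
# data = json_chat(client, "meta/llama-3.3-70b-instruct-fp8-fast",
#                  "Reply in JSON with keys 'answer' and 'why'.",
#                  "Is FP8 quantization lossy?")
```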
# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compatible, per our SDK's guidance.
from openai import OpenAI
client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)
# Try Llama-3.1-8b-Instruct-Fast
resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct-fast",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)

# Try Llama-3.3-70b-Instruct-Fp8-Fast — same client, same key
resp = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct-fp8-fast",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)