Pricing, context window, capabilities, and release date — pulled from each provider's public docs. Both are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.
Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.
| | Llama-3-8b-Instruct-Awq (`meta/llama-3-8b-instruct-awq`) | Llama-3.2-11b-Vision-Instruct (`meta/llama-3.2-11b-vision-instruct`) |
|---|---|---|
| Provider | Meta | Meta |
| Family | Llama 3 | Llama 3 |
| Modality | text | text + image |
| Context window | 8,192 tok | 128,000 tok |
| Max output | 4,096 tok | 4,096 tok |
| Released | 2024-05-09 | 2024-09-25 |
| Input price | $0.120 / 1M tok | $0.049 / 1M tok |
| Output price | $0.270 / 1M tok | $0.680 / 1M tok |
| Cache read | — | — |
| Tools | — | — |
| Streaming | yes | yes |
| Vision | — | yes |
| JSON mode | — | — |
| Reasoning | — | — |
| Prompt caching | — | — |
An 8-billion-parameter generative text model from Meta, quantized to int4 with AWQ.
The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image.
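To make the pricing rows concrete, here is a small sketch that estimates per-request cost from token counts. The rates are the per-1M-token prices listed in the table; the request sizes are hypothetical.

```python
# Hypothetical per-request cost estimate at the listed per-1M-token rates.
PRICES = {  # model id -> (input $/1M tok, output $/1M tok)
    "meta/llama-3-8b-instruct-awq": (0.120, 0.270),
    "meta/llama-3.2-11b-vision-instruct": (0.049, 0.680),
}

def estimate_cost(model: str, input_tok: int, output_tok: int) -> float:
    in_rate, out_rate = PRICES[model]
    return input_tok / 1e6 * in_rate + output_tok / 1e6 * out_rate

# e.g. 2,000 prompt tokens and 500 completion tokens on the 8B model:
print(estimate_cost("meta/llama-3-8b-instruct-awq", 2_000, 500))  # 0.000375, i.e. ~$0.0004
```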
```python
# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verification.
# openai SDK: chat/embeddings/images/audio, drop-in compatible per our SDK's own guidance.
from openai import OpenAI
client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",  # your AIgateway key
)
# Try Llama-3-8b-Instruct-Awq
resp = client.chat.completions.create(
    model="meta/llama-3-8b-instruct-awq",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
# Try Llama-3.2-11b-Vision-Instruct: same client, same key
resp = client.chat.completions.create(
    model="meta/llama-3.2-11b-vision-instruct",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```