
Llama-4-Scout-17b-16e-Instruct vs Gpt-Oss-120b

Pricing, context window, capabilities, and release date, pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switch by flipping the model string.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key, and signing up is free.
 Llama-4-Scout-17b-16e-Instruct
meta/llama-4-scout-17b-16e-instruct
Gpt-Oss-120b
openai/gpt-oss-120b
                 Llama-4-Scout-17b-16e-Instruct   Gpt-Oss-120b
Provider         Meta                             OpenAI
Family           Llama 4                          GPT OSS
Modality         text                             text
Context window   131,000 tok                      128,000 tok
Max output       4,096 tok                        4,096 tok
Released         2025-04-05                       2025-08-05
Input price      $0.270 /1M                       $0.350 /1M
Output price     $0.850 /1M                       $0.750 /1M
Cache read       —                                —
Tools            yes                              yes
Streaming        yes                              yes
Vision           yes                              —
JSON mode        yes                              yes
Reasoning        —                                yes
Prompt caching   —                                —
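The per-million-token prices above translate directly into per-request cost. As a rough illustration (a hypothetical sketch, using only the rates listed in the table; `request_cost` is not part of any SDK):

```python
# Per-1M-token prices from the table above (USD).
PRICES = {
    "meta/llama-4-scout-17b-16e-instruct": {"input": 0.27, "output": 0.85},
    "openai/gpt-oss-120b": {"input": 0.35, "output": 0.75},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, given prompt and completion token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10k-token prompt with a 2k-token completion.
llama = request_cost("meta/llama-4-scout-17b-16e-instruct", 10_000, 2_000)
oss = request_cost("openai/gpt-oss-120b", 10_000, 2_000)
print(f"${llama:.6f} vs ${oss:.6f}")  # → $0.004400 vs $0.005000
```

At these token counts Scout's cheaper input rate outweighs its pricier output; the balance flips for completion-heavy workloads.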
Llama-4-Scout-17b-16e-Instruct
meta/llama-4-scout-17b-16e-instruct
Full spec →

Meta's Llama 4 Scout is a natively multimodal model with 17 billion active parameters across 16 experts. Its mixture-of-experts architecture delivers industry-leading performance in text and image understanding.

Strengths
  • MoE (17B active / ~100B total)
  • Strong multi-lingual
  • Open-weight license
Gpt-Oss-120b
openai/gpt-oss-120b
Full spec →

OpenAI's open-weight models are designed for powerful reasoning, agentic tasks, and versatile developer use cases; gpt-oss-120b targets production, general-purpose, high-reasoning workloads.

Strengths
  • OpenAI's first open-weight release
  • Strong instruction following
  • Apache-2.0 license
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Llama-4-Scout-17b-16e-Instruct
client.chat.completions.create(
    model="meta/llama-4-scout-17b-16e-instruct",
    messages=[{"role": "user", "content": "hello"}],
)

# Try Gpt-Oss-120b — same client, same key
client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "hello"}],
)
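Both models also stream (per the table above). A minimal sketch, continuing with the `client` from the snippet above and assuming the standard OpenAI SDK streaming shape (`chunk.choices[0].delta.content`); `collect` and `stream_once` are illustrative helpers, not part of the SDK:

```python
def collect(stream) -> str:
    """Print streamed delta fragments as they arrive and return the full text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content  # None on role/stop chunks
        if delta:
            print(delta, end="", flush=True)
            parts.append(delta)
    print()
    return "".join(parts)

def stream_once(client, model: str) -> str:
    # Same call as above, with stream=True; swap the model string to race the other model.
    return collect(client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "hello"}],
        stream=True,
    ))

# stream_once(client, "meta/llama-4-scout-17b-16e-instruct")
# stream_once(client, "openai/gpt-oss-120b")
```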

Compare with another

Kimi K2.6 vs Llama-4-Scout-17b-16e-Instruct
moonshot/kimi-k2.6 · meta/llama-4-scout-17b-16e-instruct
Claude Sonnet 4.6 vs Llama-4-Scout-17b-16e-Instruct
anthropic/claude-sonnet-4.6 · meta/llama-4-scout-17b-16e-instruct
Claude Opus 4.7 vs Llama-4-Scout-17b-16e-Instruct
anthropic/claude-opus-4.7 · meta/llama-4-scout-17b-16e-instruct
Claude Opus 4.7 vs Gpt-Oss-120b
anthropic/claude-opus-4.7 · openai/gpt-oss-120b
GPT-5.4 vs Llama-4-Scout-17b-16e-Instruct
openai/gpt-5.4 · meta/llama-4-scout-17b-16e-instruct
GPT-5.4 vs Gpt-Oss-120b
openai/gpt-5.4 · openai/gpt-oss-120b
Gemini 3.1 Pro vs Llama-4-Scout-17b-16e-Instruct
google/gemini-3.1-pro · meta/llama-4-scout-17b-16e-instruct
Gemini 3.1 Pro vs Gpt-Oss-120b
google/gemini-3.1-pro · openai/gpt-oss-120b
Kimi K2.6 vs Gpt-Oss-120b
moonshot/kimi-k2.6 · openai/gpt-oss-120b