Pricing, context window, capabilities, and release date are pulled from each provider's public docs. Both models are served through the same AIgateway OpenAI-compatible endpoint; switch between them by changing the model string.
Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.
| | Kimi K2.6 (moonshot/kimi-k2.6) | Llama-4-Scout-17b-16e-Instruct (meta/llama-4-scout-17b-16e-instruct) |
|---|---|---|
| Provider | Moonshot | Meta |
| Family | Kimi | Llama 4 |
| Modality | text | text |
| Context window | 262,144 tok | 131,000 tok |
| Max output | 16,384 tok | 4,096 tok |
| Released | 2026-04-20 | 2025-04-05 |
| Input price | $0.95 /1M | $0.27 /1M |
| Output price | $4.00 /1M | $0.85 /1M |
| Cache read | $0.16 /1M | — |
| Tools | yes | yes |
| Streaming | yes | yes |
| Vision | yes | yes |
| JSON mode | yes | yes |
| Reasoning | yes | — |
| Prompt caching | yes | — |
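
To put the per-token prices in context, here is a quick back-of-the-envelope sketch using the rates from the table above. The 20k-token prompt and 1k-token completion sizes are illustrative assumptions, not benchmarks.

```python
# Per-1M-token prices from the comparison table above (USD).
PRICES = {
    "moonshot/kimi-k2.6": {"input": 0.95, "output": 4.00},
    "meta/llama-4-scout-17b-16e-instruct": {"input": 0.27, "output": 0.85},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative request: 20k tokens in, 1k tokens out.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 20_000, 1_000):.4f}")
```

At those assumed sizes the request comes to roughly $0.023 on Kimi K2.6 versus roughly $0.006 on Llama 4 Scout.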
Kimi K2.6 is a frontier-scale open-source 1T-parameter model with a 262K-token context window, multi-turn tool calling, vision inputs, and structured outputs for agentic workloads.
Meta's Llama 4 Scout is a natively multimodal model with 17 billion active parameters and 16 experts. It uses a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
```python
from openai import OpenAI

# One client, one key: the AIgateway endpoint speaks the OpenAI chat API.
client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Kimi K2.6
kimi = client.chat.completions.create(
    model="moonshot/kimi-k2.6",
    messages=[{"role": "user", "content": "hello"}],
)
print(kimi.choices[0].message.content)

# Try Llama-4-Scout-17b-16e-Instruct (same client, same key)
llama = client.chat.completions.create(
    model="meta/llama-4-scout-17b-16e-instruct",
    messages=[{"role": "user", "content": "hello"}],
)
print(llama.choices[0].message.content)
```
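
Streaming uses the same endpoint. The sketch below assumes the gateway forwards standard OpenAI-style chunk deltas when stream=True, so tokens can be printed as they arrive; swap the model string to compare the two side by side.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Stream from Kimi K2.6; use "meta/llama-4-scout-17b-16e-instruct" to compare.
stream = client.chat.completions.create(
    model="moonshot/kimi-k2.6",
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
)

# Print each content delta as soon as it arrives.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```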
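
The table lists tool support for both models, and Kimi K2.6 is described above as supporting multi-turn tool calling. The sketch below uses the standard OpenAI tools schema; the get_weather function is a hypothetical placeholder, and whether a model actually emits a tool call for a given prompt is up to the model.

```python
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Hypothetical tool definition in the standard OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="moonshot/kimi-k2.6",  # or "meta/llama-4-scout-17b-16e-instruct"
    messages=[{"role": "user", "content": "What is the weather in Lisbon?"}],
    tools=tools,
)

# If the model chose to call the tool, inspect the requested arguments.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```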