Pricing, context window, capabilities, and release date, pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switching between them is just a change to the model string.
Both models stream in parallel through your own AIgateway key, with token counts, latency, and cost updating as the responses arrive.
| | Claude Opus 4.7 (`anthropic/claude-opus-4.7`) | Llama-2-7b-Chat-HF-Lora (`meta-llama/llama-2-7b-chat-hf-lora`) |
|---|---|---|
| Provider | Anthropic | Meta |
| Family | Claude 4 | Llama 2 |
| Modality | text | text |
| Context window | 1,000,000 tok | 8,192 tok |
| Max output | 128,000 tok | 4,096 tok |
| Released | 2026-04-16 | 2024-04-02 |
| Input price | $5.00 /1M | $0.040 /1M |
| Output price | $25.00 /1M | $0.080 /1M |
| Cache read | $0.500 /1M | — |
| Tools | yes | — |
| Streaming | yes | yes |
| Vision | yes | — |
| JSON mode | yes | — |
| Reasoning | yes | — |
| Prompt caching | yes | — |
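To put the price gap in concrete terms, here is a minimal cost-estimation sketch using the per-million-token prices from the table above; the token counts in the example call are made-up illustrative values, not benchmarks.

```python
# USD per 1M tokens, mirroring the pricing rows in the table above.
PRICES = {
    "anthropic/claude-opus-4.7": {"input": 5.00, "output": 25.00},
    "meta-llama/llama-2-7b-chat-hf-lora": {"input": 0.040, "output": 0.080},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative example: a 2,000-token prompt with a 500-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.6f}")
```

With those example counts, Claude Opus 4.7 works out to about $0.0225 per request and the Llama 2 model to about $0.00012, roughly a 190x difference, before any savings from prompt caching on the Claude side.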
Claude Opus 4.7 is Anthropic's most capable generally available model to date. It is highly autonomous and performs exceptionally well on long-horizon agentic work, knowledge work, vision tasks, and memory tasks.
The Llama entry is a Llama 2 base model that Cloudflare dedicates to inference with LoRA adapters. Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters; this listing covers the 7B fine-tuned model, optimized for dialogue use cases and converted to the Hugging Face Transformers format.
from openai import OpenAI

# One client and one key for both models.
client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Claude Opus 4.7
opus = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[{"role": "user", "content": "hello"}],
)
print(opus.choices[0].message.content)

# Try Llama-2-7b-Chat-HF-Lora — same client, same key
llama = client.chat.completions.create(
    model="meta-llama/llama-2-7b-chat-hf-lora",
    messages=[{"role": "user", "content": "hello"}],
)
print(llama.choices[0].message.content)