Pricing, context windows, capabilities, and release dates below are pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switching between them is just a change of model string.
Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.
| | Claude Opus 4.7 (`anthropic/claude-opus-4.7`) | Granite-4.0-H-Micro (`ibm-granite/granite-4.0-h-micro`) |
|---|---|---|
| Provider | Anthropic | IBM |
| Family | Claude 4 | Granite |
| Modality | text | text |
| Context window | 1,000,000 tok | 131,000 tok |
| Max output | 128,000 tok | 4,096 tok |
| Released | 2026-04-16 | 2025-10-07 |
| Input price | $5.00 /1M | $0.017 /1M |
| Output price | $25.00 /1M | $0.110 /1M |
| Cache read | $0.500 /1M | — |
| Tools | yes | yes |
| Streaming | yes | yes |
| Vision | yes | — |
| JSON mode | yes | yes |
| Reasoning | yes | — |
| Prompt caching | yes | — |
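The per-token gap is easiest to read as a per-request figure. Here is a quick back-of-the-envelope calculation using the prices from the table; the 2,000-input / 500-output request size is an illustrative assumption, not a provider number:

```python
# Rough per-request cost from the table's per-1M-token prices.
# The 2,000-in / 500-out request size is an illustrative assumption.
PRICES = {
    "anthropic/claude-opus-4.7": {"in": 5.00, "out": 25.00},
    "ibm-granite/granite-4.0-h-micro": {"in": 0.017, "out": 0.110},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["in"] + output_tokens * p["out"]) / 1_000_000

for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.6f}")
# -> anthropic/claude-opus-4.7: $0.022500
# -> ibm-granite/granite-4.0-h-micro: $0.000089
```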
Claude Opus 4.7 is Anthropic's most capable generally available model to date. It is highly autonomous and performs exceptionally well on long-horizon agentic work, knowledge work, vision tasks, and memory tasks.
Granite 4.0 instruct models deliver strong performance across benchmarks, achieving industry-leading results in key agentic tasks like instruction following and function calling. Combined with their efficiency, this makes the models well suited for a wide range of use cases such as retrieval-augmented generation (RAG), multi-agent workflows, and edge deployments.
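Both models report tool support in the table above. Below is a minimal function-calling sketch against the same OpenAI-compatible endpoint; it assumes AIgateway forwards the standard `tools` parameter unchanged, and the `get_weather` schema is a hypothetical example, not a built-in:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.aigateway.sh/v1", api_key="sk-aig-...")

# A hypothetical tool schema in the standard OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="ibm-granite/granite-4.0-h-micro",  # or anthropic/claude-opus-4.7
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# If the model chose to call the tool, the call arrives as structured JSON.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```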
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Claude Opus 4.7
client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[{"role": "user", "content": "hello"}],
)

# Try Granite-4.0-H-Micro (same client, same key)
client.chat.completions.create(
    model="ibm-granite/granite-4.0-h-micro",
    messages=[{"role": "user", "content": "hello"}],
)
```
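To watch tokens, latency, and usage arrive incrementally, the way the comparison view renders them, the same call can be streamed. A minimal sketch, assuming AIgateway passes through the standard OpenAI `stream` and `stream_options` parameters:

```python
import time
from openai import OpenAI

client = OpenAI(base_url="https://api.aigateway.sh/v1", api_key="sk-aig-...")

start = time.perf_counter()
stream = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",  # flip to ibm-granite/granite-4.0-h-micro
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
    stream_options={"include_usage": True},  # final chunk carries token counts
)

for chunk in stream:
    # Content deltas arrive chunk by chunk; print them as they land.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
    # usage is populated only on the last chunk when include_usage is set.
    if chunk.usage:
        elapsed = time.perf_counter() - start
        print(f"\n{chunk.usage.total_tokens} tokens in {elapsed:.2f}s")
```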