Pricing, context window, capabilities, and release dates, pulled from each provider's public docs. Both models are served through the same AIgateway base URL and API key; switching models is a one-line change.
The comparison below streams both models in parallel with your own AIgateway key, updating tokens, latency, and cost as they arrive.
| | Claude Opus 4.7 (`anthropic/claude-opus-4.7`) | BGE-Reranker-Base (`baai/bge-reranker-base`) |
|---|---|---|
| Provider | Anthropic | BAAI |
| Family | Claude 4 | BGE |
| Modality | text | reranking |
| Context window | 1,000,000 tok | — |
| Max output | 128,000 tok | — |
| Released | 2026-04-16 | 2025-02-14 |
| Input price | $5.00 /1M | $0.0031 /1M |
| Output price | $25.00 /1M | — |
| Cache read | $0.50 /1M | — |
| Tools | yes | — |
| Streaming | yes | — |
| Vision | yes | — |
| JSON mode | yes | — |
| Reasoning | yes | — |
| Prompt caching | yes | — |
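The pricing rows above translate into a per-request cost estimate like this. The token counts are hypothetical, and the sketch assumes cached prompt tokens are billed at the cache-read rate instead of the input rate:

```python
# Prices from the table above, in dollars per million tokens.
OPUS_INPUT_PER_M = 5.00
OPUS_OUTPUT_PER_M = 25.00
OPUS_CACHE_READ_PER_M = 0.50

def opus_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the cost of one Claude Opus 4.7 call."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * OPUS_INPUT_PER_M / 1_000_000
        + cached_tokens * OPUS_CACHE_READ_PER_M / 1_000_000
        + output_tokens * OPUS_OUTPUT_PER_M / 1_000_000
    )

# Hypothetical request: 20k prompt tokens (15k served from cache), 2k output.
print(f"${opus_cost(20_000, 2_000, cached_tokens=15_000):.4f}")  # $0.0825
```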
Claude Opus 4.7 is Anthropic's most capable generally available model to date. It is highly autonomous and performs exceptionally well on long-horizon agentic work, knowledge work, vision tasks, and memory tasks.
Unlike an embedding model, a reranker takes a query and a document together as input and directly outputs a similarity score rather than an embedding. Feed it a query-passage pair to get a relevance score; the raw score can be mapped to a float in [0, 1] with a sigmoid function.
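A minimal sketch of that score-to-probability mapping. The raw logit values here are illustrative stand-ins, not real model outputs:

```python
import math

def sigmoid(x: float) -> float:
    """Map a reranker's raw relevance logit to a score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative raw logits, as a reranker might emit for two passages:
relevant_logit = 4.2      # query and passage match well
irrelevant_logit = -3.1   # unrelated passage

print(sigmoid(relevant_logit))    # close to 1
print(sigmoid(irrelevant_logit))  # close to 0
```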
from openai import OpenAI
client = OpenAI(
base_url="https://api.aigateway.sh/v1",
api_key="sk-aig-...",
)
# Claude Opus 4.7: a standard chat-completions call
resp = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
# BGE-Reranker-Base is a reranking model, not a chat model: it scores
# query-passage pairs rather than generating text, so a chat-completions
# call is not the right shape for it. Assuming AIgateway exposes it on a
# rerank route (the exact path and payload here are assumptions -- check
# the gateway docs), the request looks like:
import requests

resp = requests.post(
    "https://api.aigateway.sh/v1/rerank",  # hypothetical route
    headers={"Authorization": "Bearer sk-aig-..."},
    json={
        "model": "baai/bge-reranker-base",
        "query": "What does a reranker do?",
        "documents": [
            "A reranker scores query-document pairs for relevance.",
            "Bananas are yellow.",
        ],
    },
)