Pricing, context window, capabilities, and release dates below are pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switch between them by flipping the model string.
In the comparison, both models stream in parallel through your own AIgateway key, with tokens, latency, and cost updating as they arrive.
| | Gemini 3 Flash `google/gemini-3-flash` | M2.7 `minimax/m2.7` |
|---|---|---|
| Provider | Google | MiniMax |
| Family | Gemini 3 | MiniMax M |
| Modality | text | text |
| Context window | 1,000,000 tok | 128,000 tok |
| Max output | 8,192 tok | 4,096 tok |
| Released | 2026-04-13 | 2026-04-13 |
| Input price | $0.50 / 1M tok | $0.30 / 1M tok |
| Output price | $3.00 / 1M tok | $1.20 / 1M tok |
| Cache read | $0.05 / 1M tok | — |
| Tools | yes | — |
| Streaming | yes | yes |
| Vision | yes | — |
| JSON mode | yes | — |
| Reasoning | — | — |
| Prompt caching | yes | — |
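At the listed prices, per-request cost is a straight linear function of token counts. Here is a minimal sketch; the `PRICES` dict just restates the rows above, and `request_cost` with its 10k/1k token example is illustrative, not part of the gateway API:

```python
# Per-million-token prices (USD), restating the table above.
PRICES = {
    "google/gemini-3-flash": {"input": 0.50, "output": 3.00},
    "minimax/m2.7": {"input": 0.30, "output": 1.20},
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD for one request at the listed per-1M-token prices."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

# Example: 10,000 prompt tokens + 1,000 completion tokens.
print(f"${request_cost('google/gemini-3-flash', 10_000, 1_000):.4f}")  # $0.0080
print(f"${request_cost('minimax/m2.7', 10_000, 1_000):.4f}")           # $0.0042
```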
Gemini 3 Flash is Google's fast multimodal model with frontier intelligence and strong search and grounding capabilities.
M2.7 is MiniMax's language model with multilingual capabilities.
```python
# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verification.
# openai SDK: chat/embeddings/images/audio, drop-in compatible per our SDK guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)
# Try Gemini 3 Flash
resp = client.chat.completions.create(
    model="google/gemini-3-flash",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)

# Try M2.7 with the same client and the same key
resp = client.chat.completions.create(
    model="minimax/m2.7",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```
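The side-by-side comparison streams both models concurrently. Below is a minimal sketch of the same idea, assuming the standard OpenAI SDK's `stream=True` interface; the `stream_once` helper, the thread pool, and the first-token timing are illustrative, not the gateway's own instrumentation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

def stream_once(model: str) -> None:
    """Stream one completion and report time-to-first-token and chunk count."""
    start = time.perf_counter()
    first_token = None
    chunks = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "hello"}],
        stream=True,
    )
    for chunk in stream:
        # Some chunks carry no content delta (e.g. role or final frames).
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token is None:
                first_token = time.perf_counter() - start
            chunks += 1
    ttft = f"{first_token:.2f}s" if first_token is not None else "n/a"
    print(f"{model}: first token at {ttft}, {chunks} content chunks")

# Stream both models in parallel, mirroring the comparison above.
with ThreadPoolExecutor(max_workers=2) as pool:
    pool.map(stream_once, ["google/gemini-3-flash", "minimax/m2.7"])
```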