
Gemma-Sea-Lion-V4-27b-IT vs Gemma-7b-IT

Pricing, context window, capabilities, and release date — pulled from each provider's public docs. Both are available via the same AIgateway OpenAI-compatible endpoint; flip the model string to switch.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key; it's free to sign up.
                  Gemma-Sea-Lion-V4-27b-IT                Gemma-7b-IT
                  aisingapore/gemma-sea-lion-v4-27b-it    hf/google/gemma-7b-it
Provider          AI Singapore                            Hugging Face
Family            Gemma                                   Gemma
Modality          text                                    text
Context window    128,000 tok                             8,192 tok
Max output        4,096 tok                               4,096 tok
Released          2025-09-23                              2024-04-01
Input price       $0.350 /1M                              $0.080 /1M
Output price      $0.560 /1M                              $0.160 /1M
Cache read        —                                       —
Tools             —                                       —
Streaming         yes                                     yes
Vision            —                                       —
JSON mode         —                                       —
Reasoning         —                                       —
Prompt caching    —                                       —
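The listed per-million-token rates make back-of-envelope cost estimates straightforward. A minimal sketch in plain Python (no SDK needed) using the input/output prices from the table; the token counts are illustrative, not measured:

```python
# $ per 1M tokens (input, output), as listed in the table above.
PRICES = {
    "aisingapore/gemma-sea-lion-v4-27b-it": (0.35, 0.56),
    "hf/google/gemma-7b-it": (0.08, 0.16),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 2,000-token prompt with a 500-token reply:
print(f"{estimate_cost('aisingapore/gemma-sea-lion-v4-27b-it', 2000, 500):.6f}")  # → 0.000980
print(f"{estimate_cost('hf/google/gemma-7b-it', 2000, 500):.6f}")                 # → 0.000240
```

At these rates the 27B SEA-LION costs roughly 4x the 7B Gemma per token, so the trade-off is capability and context length versus price.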
Gemma-Sea-Lion-V4-27b-IT
aisingapore/gemma-sea-lion-v4-27b-it
Full spec →

SEA-LION (Southeast Asian Languages In One Network) is a collection of large language models (LLMs) pretrained and instruction-tuned for the Southeast Asia (SEA) region.

Strengths
  • General-purpose chat
  • Open-weight
Gemma-7b-IT
hf/google/gemma-7b-it
Full spec →

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants.

Strengths
  • General-purpose chat
  • Open-weight
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Gemma-Sea-Lion-V4-27b-IT
resp = client.chat.completions.create(
    model="aisingapore/gemma-sea-lion-v4-27b-it",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)

# Try Gemma-7b-IT — same client, same key
resp = client.chat.completions.create(
    model="hf/google/gemma-7b-it",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
Get an AIgateway key
Add a third model

Compare with another

Gemma-Sea-Lion-V4-27b-IT vs Embeddinggemma-300m
aisingapore/gemma-sea-lion-v4-27b-it · google/embeddinggemma-300m
Embeddinggemma-300m vs Gemma-7b-IT
google/embeddinggemma-300m · hf/google/gemma-7b-it
Gemma-Sea-Lion-V4-27b-IT vs Gemma-2b-IT-Lora
aisingapore/gemma-sea-lion-v4-27b-it · google/gemma-2b-it-lora
Gemma-Sea-Lion-V4-27b-IT vs Gemma-3-12b-IT
aisingapore/gemma-sea-lion-v4-27b-it · google/gemma-3-12b-it
Gemma-Sea-Lion-V4-27b-IT vs Gemma-4-26b-A4b-IT
aisingapore/gemma-sea-lion-v4-27b-it · google/gemma-4-26b-a4b-it
Gemma-Sea-Lion-V4-27b-IT vs Gemma-7b-IT-Lora
aisingapore/gemma-sea-lion-v4-27b-it · google/gemma-7b-it-lora
Gemma-2b-IT-Lora vs Gemma-7b-IT
google/gemma-2b-it-lora · hf/google/gemma-7b-it
Gemma-3-12b-IT vs Gemma-7b-IT
google/gemma-3-12b-it · hf/google/gemma-7b-it
Gemma-4-26b-A4b-IT vs Gemma-7b-IT
google/gemma-4-26b-a4b-it · hf/google/gemma-7b-it