
Embeddinggemma-300m vs Gemma-7b-IT-Lora

Pricing, context window, capabilities, and release dates, pulled from each provider's public docs. Both models are available through the same AIgateway OpenAI-compatible endpoint; switch between them by changing the model string.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key; signing up is free.
                  Embeddinggemma-300m           Gemma-7b-IT-Lora
                  google/embeddinggemma-300m    google/gemma-7b-it-lora
Provider          Google                        Google
Family            Gemma                         Gemma
Modality          embedding                     text
Context window    —                             3,500 tok
Max output        —                             4,096 tok
Released          2025-09-04                    2024-04-02
Input price       $0.020 /1M                    $0.080 /1M
Output price      —                             $0.160 /1M
Cache read        —                             —
Tools             —                             —
Streaming         —                             yes
Vision            —                             yes
JSON mode         —                             —
Reasoning         —                             —
Prompt caching    —                             —
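With the per-token prices listed above, the cost of a single request is simple arithmetic. A minimal sketch (the token counts below are made-up examples, not measurements from either model):

```python
# Per-million-token prices from the spec table above.
PRICES = {
    "google/embeddinggemma-300m": {"input": 0.020, "output": 0.0},
    "google/gemma-7b-it-lora": {"input": 0.080, "output": 0.160},
}

def request_cost(model: str, input_tokens: int, output_tokens: int = 0) -> float:
    """Cost in dollars for one request at the listed $/1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical request: 1,000 prompt tokens, 500 completion tokens.
print(f"{request_cost('google/gemma-7b-it-lora', 1_000, 500):.6f}")  # chat request
print(f"{request_cost('google/embeddinggemma-300m', 1_000):.6f}")    # embedding request
```

At these rates the chat request costs $0.000160 and the embedding request $0.000020, an 8x gap on input pricing alone.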
Embeddinggemma-300m
google/embeddinggemma-300m
Full spec →

EmbeddingGemma is a 300M-parameter open embedding model from Google, state-of-the-art for its size, built from Gemma 3 (with T5Gemma initialization) and the same research and technology used to create the Gemini models. It produces vector representations of text, making it well-suited for search and retrieval as well as classification, clustering, and semantic similarity. It was trained on data in 100+ spoken languages.

Strengths
  • Semantic similarity
  • Vector search
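Both strengths reduce to comparing embedding vectors, most commonly by cosine similarity. A minimal sketch of that ranking step; the vectors here are tiny made-up stand-ins, where in practice they would come from `client.embeddings.create(model="google/embeddinggemma-300m", input=[...])`:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up 3-dim vectors standing in for real embedding output.
query = [0.1, 0.9, 0.2]
docs = {
    "doc_a": [0.1, 0.8, 0.3],
    "doc_b": [0.9, 0.1, 0.0],
}

# Rank documents by similarity to the query vector.
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # doc_a points in nearly the same direction as the query
```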
Gemma-7b-IT-Lora
google/gemma-7b-it-lora
Full spec →

This is a Gemma-7B base model that Cloudflare dedicates to inference with LoRA adapters. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.

Strengths
  • General-purpose chat
  • Open-weight
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Embeddinggemma-300m — an embedding model, so use the embeddings endpoint
client.embeddings.create(
    model="google/embeddinggemma-300m",
    input="hello",
)

# Try Gemma-7b-IT-Lora — same client, same key
client.chat.completions.create(
    model="google/gemma-7b-it-lora",
    messages=[{"role":"user","content":"hello"}],
)
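The live race above reports tokens, time-to-first-token, and total latency as chunks arrive. A minimal sketch of that aggregation logic, run here on made-up chunk timings rather than on a real `stream=True` response:

```python
def summarize_stream(chunks, t_start: float) -> dict:
    """Fold a stream of (arrival_time, text) chunks into chunk count,
    time-to-first-token (TTFT), and total latency. `chunks` stands in
    for the deltas you'd iterate over from create(..., stream=True)."""
    first = None
    pieces = []
    last = t_start
    for t, text in chunks:
        if first is None:
            first = t  # first token marks TTFT
        pieces.append(text)
        last = t
    return {
        "chunks": len(pieces),
        "text": "".join(pieces),
        "ttft": (first - t_start) if first is not None else None,
        "total": last - t_start,
    }

# Fake arrivals: three chunks at 0.10s, 0.15s, and 0.30s after the request.
stats = summarize_stream([(0.10, "Hel"), (0.15, "lo"), (0.30, "!")], t_start=0.0)
print(stats["text"], stats["ttft"], stats["total"])
```

In a real run you would record `time.monotonic()` before the request and at each chunk; the fold itself is identical.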
Get an AIgateway key · Add a third model

Compare with another

Gemma-Sea-Lion-V4-27b-IT vs Embeddinggemma-300m
aisingapore/gemma-sea-lion-v4-27b-it · google/embeddinggemma-300m
Embeddinggemma-300m vs Gemma-2b-IT-Lora
google/embeddinggemma-300m · google/gemma-2b-it-lora
Embeddinggemma-300m vs Gemma-3-12b-IT
google/embeddinggemma-300m · google/gemma-3-12b-it
Embeddinggemma-300m vs Gemma-4-26b-A4b-IT
google/embeddinggemma-300m · google/gemma-4-26b-a4b-it
Embeddinggemma-300m vs Gemma-7b-IT
google/embeddinggemma-300m · hf/google/gemma-7b-it
Gemma-Sea-Lion-V4-27b-IT vs Gemma-7b-IT-Lora
aisingapore/gemma-sea-lion-v4-27b-it · google/gemma-7b-it-lora
Gemma-2b-IT-Lora vs Gemma-7b-IT-Lora
google/gemma-2b-it-lora · google/gemma-7b-it-lora
Gemma-3-12b-IT vs Gemma-7b-IT-Lora
google/gemma-3-12b-it · google/gemma-7b-it-lora
Gemma-4-26b-A4b-IT vs Gemma-7b-IT-Lora
google/gemma-4-26b-a4b-it · google/gemma-7b-it-lora