EmbeddingGemma is a 300M-parameter open embedding model from Google, state-of-the-art for its size, built on Gemma 3 (with T5Gemma initialization) and the same research and technology used to create the Gemini models. It produces vector representations of text, making it well suited to search and retrieval, as well as classification, clustering, and semantic similarity. The model was trained on data in more than 100 spoken languages.
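Semantic similarity between two embedding vectors is usually scored with cosine similarity. A minimal sketch of ranking documents against a query this way (the vectors below are toy 3-dimensional values, not real EmbeddingGemma outputs):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding responses.
query = [0.1, 0.9, 0.2]
docs = {"doc_a": [0.1, 0.8, 0.3], "doc_b": [-0.7, 0.1, 0.5]}

# Pick the document whose embedding is closest to the query.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # doc_a
```

In practice you would fill `query` and `docs` with vectors returned by the embeddings endpoint and keep the ranking logic unchanged.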
```shell
curl https://api.aigateway.sh/v1/embeddings \
  -H "Authorization: Bearer $AIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"google/embeddinggemma-300m","input":"hello"}'
```

Request body:

```json
{
  "model": "google/embeddinggemma-300m",
  "input": "Text to embed, or an array of strings for batch."
}
```

Response:

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0123, -0.0456, 0.0789 /* ... */]
    }
  ],
  "model": "google/embeddinggemma-300m",
  "usage": { "prompt_tokens": 5, "total_tokens": 5 }
}
```

Using the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.aigateway.sh/v1", api_key="sk-aig-...")
r = client.embeddings.create(model="google/embeddinggemma-300m", input="hello world")
print(r.data[0].embedding[:5])
```
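When calling the endpoint without an SDK, the response can be parsed with the standard library. A sketch against a canned payload matching the schema above (the values are illustrative, copied from the example response):

```python
import json

# Canned response in the documented shape; real values come from the API.
raw = """{
  "object": "list",
  "data": [
    {"object": "embedding", "index": 0, "embedding": [0.0123, -0.0456, 0.0789]}
  ],
  "model": "google/embeddinggemma-300m",
  "usage": {"prompt_tokens": 5, "total_tokens": 5}
}"""

resp = json.loads(raw)

# For batch inputs, each item carries an "index" tying it back to its input
# string; sorting by index restores input order.
vectors = [item["embedding"] for item in sorted(resp["data"], key=lambda d: d["index"])]
print(len(vectors), resp["usage"]["total_tokens"])  # 1 5
```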