LlamaIndex's OpenAI integration accepts a custom `api_base`, so you can route every embedding, completion, and chat call through AIgateway. For completions and chat, use the OpenAI LLM class with the AIgateway base URL:
```python
from llama_index.llms.openai import OpenAI as LlamaOpenAI

llm = LlamaOpenAI(
    model="anthropic/claude-sonnet-4.6",
    api_base="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

resp = llm.complete("hello")
```

Same pattern for embeddings — `OpenAIEmbedding` with `api_base` set, then any embedding model from our catalog (e.g. `baai/bge-m3`).
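A minimal sketch of the embedding side, following the same pattern (the placeholder API key is illustrative; note that some LlamaIndex versions validate `OpenAIEmbedding` model names against OpenAI's own catalog and reject others — if yours does, `OpenAILikeEmbedding` from the `llama-index-embeddings-openai-like` package is the drop-in alternative):

```python
from llama_index.embeddings.openai import OpenAIEmbedding

# Route embedding calls through AIgateway by overriding api_base.
embed_model = OpenAIEmbedding(
    model="baai/bge-m3",
    api_base="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Embed a single string; returns a list of floats.
vector = embed_model.get_text_embedding("hello")
```

To use both objects throughout a LlamaIndex pipeline, assign them to the global `Settings` (`Settings.llm = llm`, `Settings.embed_model = embed_model`) so indexes and query engines pick them up automatically.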