
Open WebUI + AIgateway

The 100k-star local ChatGPT — two env vars and you're on every AIgateway model.

Open WebUI is the most popular self-hosted LLM front-end (100k+ stars), commonly paired with Ollama. Because it also speaks the OpenAI protocol, AIgateway registers as a single OpenAI API connection and every model appears in the picker.

Open WebUI homepage →
Setup

Three steps or fewer — step 2 is an alternative to step 1, and step 3 is optional.

STEP 01

Docker — set the OpenAI base URL

Pass OPENAI_API_BASE_URL and OPENAI_API_KEY. Open WebUI will treat AIgateway as the connection for everything under the OpenAI tab.

docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL="https://api.aigateway.sh/v1" \
  -e OPENAI_API_KEY="sk-aig-..." \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
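Before (or after) starting the container, you can sanity-check the gateway from the host. A minimal sketch — substitute your real key for the `sk-aig-...` placeholder; this hits the same endpoint Open WebUI polls to fill its model picker:

```shell
# List models through AIgateway's OpenAI-compatible endpoint.
# A JSON "data" array back means the base URL and key are good.
curl -s https://api.aigateway.sh/v1/models \
  -H "Authorization: Bearer sk-aig-..."
```

If this returns an auth error, fix the key before debugging anything inside the container.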
STEP 02

Or configure in Admin Panel → Connections

If you've already deployed Open WebUI, sign in as admin → Settings → Connections → OpenAI API. URL: https://api.aigateway.sh/v1 · Key: your AIgateway key. Save and the model list populates from /v1/models.
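To confirm a key works end to end (not just for listing), you can send one chat request over the same OpenAI wire format Open WebUI uses once the connection is saved. A hedged sketch — the model id here is illustrative; use any id returned by `/v1/models`:

```shell
# Minimal OpenAI-protocol chat completion through AIgateway.
# "gpt-4o" is a placeholder model id — pick one from your catalog.
curl -s https://api.aigateway.sh/v1/chat/completions \
  -H "Authorization: Bearer sk-aig-..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```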

STEP 03

Combine with Ollama (optional)

Open WebUI can serve local Ollama models and AIgateway models side by side: local inference for privacy-sensitive prompts, AIgateway for frontier models. Users pick per thread in the model picker.

docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL="https://api.aigateway.sh/v1" \
  -e OPENAI_API_KEY="sk-aig-..." \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434" \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
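If Ollama models don't appear, the usual culprit is the container failing to reach the host. A quick check, assuming Ollama is on its default port 11434:

```shell
# Ollama's tag listing — should return your locally pulled models.
curl -s http://localhost:11434/api/tags

# From inside the container, the same host must resolve as
# host.docker.internal. On Linux, that name isn't defined by default —
# add this flag to the docker run command above:
#   --add-host=host.docker.internal:host-gateway
```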
Notes
  • Open WebUI supports multiple OpenAI connections — you can point one at AIgateway and another at Azure/OpenAI directly, and users pick per thread.
  • The /v1/models endpoint returns the full AIgateway catalog (100+ entries) so every model appears in the picker.
  • For teams, mint sub-account keys from AIgateway and paste each per-user key into their Open WebUI profile for per-seat metering.
More integrations

Same key. Every other tool.