
Providers

A provider is a named entry in [providers.*] that tells the gateway how to reach an LLM API. It declares the base URL, which models the API serves, how to authenticate, and how long to wait for a response. Every request that passes through the gateway ultimately resolves to a provider.

Definition

Each provider is a named TOML table under the [providers] section. The table key becomes the provider’s identifier and is referenced by targets, routes, and the model-lookup system.
[providers.openai]
base_url   = "https://api.openai.com/v1"
models     = ["gpt-4o", "gpt-4o-mini", "o3", "o4-mini"]
timeout_ms = 30000

[providers.anthropic]
base_url   = "https://api.anthropic.com/v1"
models     = ["claude-sonnet-4-6", "claude-opus-4-6"]
timeout_ms = 60000

Fields

  • base_url (string, required) — The provider’s API base URL.
  • models (array, required) — Model names this provider handles. The gateway uses this list to resolve which provider serves a given model.
  • credential (string, optional) — Credential location (see below). When omitted, the gateway uses a convention-based default.
  • timeout_ms (integer, optional) — Request timeout in milliseconds. Defaults to the value in [defaults.provider], or 30,000 ms if that is also absent.
  • auth_type (string, optional) — Authentication method: bearer (default), api_key_header, or query_param.
  • auth_header_name (string, optional) — Custom header name when auth_type is api_key_header. Defaults to the provider’s standard header (e.g. api-key for Azure OpenAI).
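To illustrate how the fields combine, here is a sketch of a provider entry that sets every optional field. The resource name, deployment name, and 45-second timeout are placeholders, not recommended values:

```toml
[providers.azure-openai]
base_url         = "https://YOUR_RESOURCE.openai.azure.com/openai"
models           = ["my-gpt4o-deployment"]
credential       = "env::AZURE_OPENAI_API_KEY"
timeout_ms       = 45000
auth_type        = "api_key_header"
auth_header_name = "api-key"
```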

Credential resolution

The gateway never stores raw API keys in configuration files. Instead, credentials are referenced by location, most commonly with the env:: prefix. Three patterns exist:
  1. Convention default — When credential is omitted, the gateway derives the environment variable from the provider name. openai resolves to env::OPENAI_API_KEY, anthropic to env::ANTHROPIC_API_KEY, and so on for every known provider.
  2. Explicit env reference — Set credential = "env::MY_CUSTOM_KEY" to point at a specific environment variable.
  3. No credential — For local providers like Ollama and vLLM, set credential = "none". The gateway sends requests without an Authorization header.
[providers.openai]
base_url = "https://api.openai.com/v1"
# credential is omitted — resolves to env::OPENAI_API_KEY by convention
models   = ["gpt-4o"]

[providers.ollama]
base_url   = "http://localhost:11434/v1"
credential = "none"
models     = ["llama3.1", "mistral"]
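Pattern 2, the explicit env reference, follows the same shape. The provider name and variable name below are illustrative:

```toml
[providers.openai-eu]
base_url   = "https://api.openai.com/v1"
credential = "env::OPENAI_EU_API_KEY"
models     = ["gpt-4o-mini"]
```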
In passthrough mode the gateway resolves the provider’s credential leniently — if the environment variable is missing, the credential resolves to an empty string and the caller’s own Authorization header is forwarded instead. Targets use strict resolution; see the Targets page.

Auth types

Most providers use the default bearer type, which sends Authorization: Bearer <key>. Two alternatives exist for providers with different conventions:
  • api_key_header — Sends the key via a provider-specific header (e.g. Azure OpenAI’s api-key header).
  • query_param — Appends the key as a URL query parameter (e.g. Google Gemini’s key= parameter).
[providers.azure-openai]
base_url   = "https://YOUR_RESOURCE.openai.azure.com/openai"
credential = "env::AZURE_OPENAI_API_KEY"
auth_type  = "api_key_header"
models     = ["my-gpt4o-deployment"]

[providers.google-gemini]
base_url   = "https://generativelanguage.googleapis.com/v1beta/openai"
credential = "env::GEMINI_API_KEY"
auth_type  = "query_param"
models     = ["gemini-2.5-pro-preview-06-05"]

Timeout inheritance

When the gateway needs to determine the timeout for a request, it checks four levels in order:
model override ([models.o3])
  --> provider timeout ([providers.openai] timeout_ms)
    --> defaults ([defaults.provider] timeout_ms)
      --> hardcoded 30,000 ms
The first value found wins. This lets you set a generous timeout for a specific reasoning model without changing the provider-wide or global default.
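A sketch of all three configurable levels together (per-model override fields are documented on the Models page; this assumes timeout_ms is valid there):

```toml
# Global fallback for every provider
[defaults.provider]
timeout_ms = 30000

# Provider-wide value, overrides the global default
[providers.openai]
base_url   = "https://api.openai.com/v1"
models     = ["gpt-4o", "o3"]
timeout_ms = 30000

# Per-model override, wins for o3 only
[models.o3]
timeout_ms = 120000
```

With this configuration, requests for o3 wait up to 120 seconds, while requests for gpt-4o use the provider-wide 30 seconds.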

How providers relate to routing

Providers serve as the model registry — the foundation that all routing layers build upon.
  • Passthrough routing (L1) — The gateway looks up which provider registered a model and forwards the request with the caller’s own API key. No targets or routes are needed.
  • Managed routing (L2) — Targets reference models that must exist in a provider’s models list. Routes then group targets into strategies. The provider supplies the base URL and default credential.
  • Function routing (L3) — Functions reference either targets or model names. Model names are still resolved through the provider registry.
A model that does not appear in any provider’s models list is unknown to the gateway and results in a 404 error.

Next steps

Models

Per-model overrides for timeouts and other settings.

Targets

Pairing models with gateway-owned credentials.

Configuration Reference

Full TOML configuration reference.