
Passthrough Routing

Passthrough is the gateway’s default routing mode. The gateway acts as a transparent proxy — it forwards your Authorization header directly to the upstream provider without reading or storing your API key.

When passthrough applies

A request uses passthrough when the requested model does not match any configured function ([functions.*]) or route ([routes.*]). The gateway looks up which provider registered that model name and forwards the request to that provider’s base_url. If the model does not match any provider either, the gateway returns a 404 UnknownModel error.
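The lookup order described above can be sketched in Python. This is illustrative pseudologic only, not the gateway's actual implementation; the function and field names are assumptions:

```python
# Sketch of the resolution order: functions, then routes, then provider
# model lists; anything unmatched is a 404 UnknownModel.
def resolve(model, functions, routes, providers):
    """Return (mode, target) for a requested model name."""
    if model in functions:                 # [functions.*] match
        return ("function", functions[model])
    if model in routes:                    # [routes.*] match
        return ("route", routes[model])
    for name, cfg in providers.items():    # provider model list -> passthrough
        if model in cfg["models"]:
            return ("passthrough", name)
    raise LookupError("404 UnknownModel")  # no provider registered the model
```

For example, with only a provider configured, `resolve("gpt-4o", {}, {}, {"openai": {"models": ["gpt-4o"]}})` resolves to passthrough via the openai provider.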

Provider model registration

Each provider declares its models list. The gateway uses this to resolve passthrough requests to the correct upstream.
[providers.openai]
base_url = "https://api.openai.com/v1"
models = ["gpt-4o", "gpt-4o-mini", "o3"]

[providers.anthropic]
base_url = "https://api.anthropic.com/v1"
models = ["claude-sonnet-4-6", "claude-haiku-4-5"]

With this configuration, a request for gpt-4o routes to OpenAI and a request for claude-sonnet-4-6 routes to Anthropic. Both require the caller to supply their own API key.
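Conceptually, the provider models lists above form a single model-to-provider lookup table. A minimal sketch, with the config mirrored as Python dicts:

```python
# Mirror of the [providers.*] blocks above (field names follow the TOML keys).
providers = {
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "models": ["gpt-4o", "gpt-4o-mini", "o3"],
    },
    "anthropic": {
        "base_url": "https://api.anthropic.com/v1",
        "models": ["claude-sonnet-4-6", "claude-haiku-4-5"],
    },
}

# Flatten into one model -> provider index for passthrough resolution.
model_index = {
    model: name
    for name, cfg in providers.items()
    for model in cfg["models"]
}
```

Here `model_index["gpt-4o"]` yields `"openai"` and `model_index["claude-sonnet-4-6"]` yields `"anthropic"`; a model absent from every list has no entry, which corresponds to the 404 UnknownModel case.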

The simplest deployment mode

Passthrough requires only [providers.*] blocks — no targets, routes, or functions. This is the minimal viable gateway configuration:
[server]
host = "0.0.0.0"
port = 4000

[providers.openai]
base_url = "https://api.openai.com/v1"
models = ["gpt-4o", "gpt-4o-mini"]

[providers.anthropic]
base_url = "https://api.anthropic.com/v1"
models = ["claude-sonnet-4-6"]

Every model in the configuration uses passthrough. Callers must provide their own API key in the Authorization header (or equivalent provider-specific header). The gateway adds telemetry, security scanning, and cross-format translation without touching credentials.
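How a passthrough forward might assemble the upstream request can be sketched as follows. This is an assumption-laden illustration (the helper name and exact path handling are invented), but it captures the key property: the caller's Authorization header is copied through verbatim, and the request path is appended to the matched provider's base_url.

```python
# Illustrative sketch: build the upstream request for a passthrough call.
def build_upstream_request(path, caller_headers, provider):
    url = provider["base_url"].rstrip("/") + path
    headers = dict(caller_headers)   # Authorization passes through untouched
    return url, headers

openai = {"base_url": "https://api.openai.com/v1"}
url, headers = build_upstream_request(
    "/chat/completions",
    {"Authorization": "Bearer sk-caller-key"},
    openai,
)
```

The gateway never reads or stores the key; it only relays the header it received.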

Explicit provider prefix

You can force passthrough to a specific provider using the provider::model prefix syntax:
{ "model": "openai::gpt-4o", "messages": [...] }

This bypasses top-down resolution entirely. Even if a managed route or function is configured for gpt-4o, the prefix sends the request through passthrough to the OpenAI provider.
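Splitting the prefix is straightforward; a minimal sketch (the function name is illustrative):

```python
# Split "provider::model" into its parts; provider is None when no
# prefix is present and normal top-down resolution applies.
def split_prefix(model):
    if "::" in model:
        provider, _, bare = model.partition("::")
        return provider, bare
    return None, model
```

So `split_prefix("openai::gpt-4o")` gives `("openai", "gpt-4o")`, while a plain `"gpt-4o"` gives `(None, "gpt-4o")`.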

Credential defaults on providers

Providers can declare a credential field:
[providers.openai]
base_url   = "https://api.openai.com/v1"
credential = "env::OPENAI_API_KEY"
models     = ["gpt-4o", "gpt-4o-mini"]

This credential is used by managed routing targets that reference the provider — it is not used for passthrough. Passthrough always forwards the caller’s key. The provider-level credential serves as a default for [targets.*] entries that do not declare their own credential.

When credential is omitted, the gateway uses a convention-based default derived from the provider name (e.g. env::OPENAI_API_KEY for the openai provider). This convention applies only to managed routing targets, never to passthrough requests.
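The default-selection logic for managed targets can be sketched like this (a simplification; the real gateway may validate the reference further):

```python
# Pick the credential a managed target would use: an explicit provider
# `credential` wins, otherwise fall back to the convention-based default
# env::<PROVIDER>_API_KEY derived from the provider name.
def target_credential(provider_name, provider_cfg):
    explicit = provider_cfg.get("credential")
    if explicit is not None:
        return explicit
    return f"env::{provider_name.upper()}_API_KEY"
```

For the openai provider with no credential field, this yields env::OPENAI_API_KEY; passthrough requests never consult this value.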

Coexistence with managed routing

Passthrough continues to work for any model that does not have a matching route or function. A single configuration can serve both managed and passthrough models simultaneously:
[providers.openai]
base_url   = "https://api.openai.com/v1"
credential = "env::OPENAI_API_KEY"
models     = ["gpt-4o", "gpt-4o-mini", "o3"]

[targets.openai-gpt4o]
model = "gpt-4o"

# gpt-4o is managed (gateway injects its own key).
# gpt-4o-mini and o3 use passthrough (caller provides key).

The resolution order is always top-down: function match, then route match, then provider match. Models without a function or route fall through to passthrough automatically.
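Applying that order to the example configuration, each model can be classified as managed or passthrough. A sketch, assuming (as in the example) that a matching target backs a managed route:

```python
# Classify models per the top-down order: a function or route/target match
# means managed; a provider-only match means passthrough. Illustrative only.
def routing_mode(model, functions, routes, provider_models):
    if model in functions or model in routes:
        return "managed"
    if model in provider_models:
        return "passthrough"
    return "unknown"

provider_models = ["gpt-4o", "gpt-4o-mini", "o3"]
routes = {"gpt-4o": "openai-gpt4o"}   # from [targets.openai-gpt4o]
```

With these inputs, gpt-4o is managed while gpt-4o-mini and o3 fall through to passthrough, matching the comments in the config above.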

Next steps

Simple Routing

Set up gateway-owned credentials for specific models.

Providers

Supported providers and format translation.

Configuration

Full TOML reference for all gateway settings.