Simple Routing

This guide shows you how to set up gateway-owned credentials for specific models. Once configured, callers send requests without an API key and the gateway injects its own.

Prerequisites

  • A working gateway configuration with at least one provider
  • The provider’s API key available as an environment variable

Option A — bare targets

The simplest way to enable managed routing is to declare a [targets.*] block without a corresponding [routes.*] entry. The config builder auto-creates a single route for the target’s model.
[providers.openai]
base_url   = "https://api.openai.com/v1"
credential = "env::OPENAI_API_KEY"
models     = ["gpt-4o", "gpt-4o-mini"]

[targets.openai-gpt4o]
model = "gpt-4o"
With this configuration:
  • gpt-4o is managed — the gateway uses the credential from [providers.openai] (resolved via env::OPENAI_API_KEY). The caller does not need an API key.
  • gpt-4o-mini remains passthrough — the caller must provide their own key.
The bare target inherits its credential from the provider. To use a different key for the managed target, set credential explicitly:
[targets.openai-gpt4o]
model      = "gpt-4o"
credential = "env::MANAGED_OPENAI_KEY"

Option B — explicit route

To make the routing configuration self-documenting, you can declare both the target and the route explicitly:
[targets.openai-gpt4o]
model = "gpt-4o"

[routes.primary]
models   = ["gpt-4o"]
strategy = "single"
targets  = ["openai-gpt4o"]
This is functionally equivalent to the bare target in Option A. Use explicit routes when you want named routes for clarity or when you plan to add weighted or fallback strategies later.
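For example, switching the route to a weighted strategy later only changes the [routes.primary] block. The sketch below is illustrative only: the "weighted" strategy name, the weights field, and the two per-target environment variables are assumptions rather than confirmed syntax; see the Weighted Routing guide for the exact schema:
[targets.openai-gpt4o-a]
model      = "gpt-4o"
credential = "env::OPENAI_KEY_A"   # hypothetical key, for illustration

[targets.openai-gpt4o-b]
model      = "gpt-4o"
credential = "env::OPENAI_KEY_B"   # hypothetical key, for illustration

[routes.primary]
models   = ["gpt-4o"]
strategy = "weighted"              # assumed strategy name
targets  = ["openai-gpt4o-a", "openai-gpt4o-b"]
weights  = [0.8, 0.2]              # assumed field; check the Weighted Routing guide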

Testing it

Start the gateway with your configuration, then send a request without an Authorization header:
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
The gateway injects its own API key and proxies the request to OpenAI. You should receive a standard chat completion response.
If the environment variable referenced by credential is not set at startup, the gateway fails fast with a configuration error. Check your environment before starting.

Mixed mode

Managed and passthrough routing coexist in a single configuration. Models with a matching target or route use gateway-owned credentials; everything else falls through to passthrough.
[providers.openai]
base_url   = "https://api.openai.com/v1"
credential = "env::OPENAI_API_KEY"
models     = ["gpt-4o", "gpt-4o-mini", "o3"]

[providers.anthropic]
base_url   = "https://api.anthropic.com/v1"
credential = "env::ANTHROPIC_API_KEY"
models     = ["claude-sonnet-4-6"]

[targets.openai-gpt4o]
model = "gpt-4o"

[targets.anthropic-sonnet]
model = "claude-sonnet-4-6"
Model               Routing mode   Caller needs API key?
gpt-4o              Managed        No
claude-sonnet-4-6   Managed        No
gpt-4o-mini         Passthrough    Yes
o3                  Passthrough    Yes
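For reference, the two bare targets above behave as if the config builder had generated explicit single-strategy routes along these lines (the generated route names shown here are an assumption for illustration):
[routes.openai-gpt4o]
models   = ["gpt-4o"]
strategy = "single"
targets  = ["openai-gpt4o"]

[routes.anthropic-sonnet]
models   = ["claude-sonnet-4-6"]
strategy = "single"
targets  = ["anthropic-sonnet"]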
To verify that passthrough still works for unmanaged models, send a request with your own key:
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Credential resolution order

When the gateway resolves a managed target’s credential, it checks in this order:
  1. Target-level credential — explicit on the [targets.*] block
  2. Provider-level credential — declared on the [providers.*] block
  3. Convention-based — derived from the provider name (e.g. env::OPENAI_API_KEY for openai)
  4. Error — if none of the above resolves, the gateway fails at startup
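As a sketch, the configuration below exercises the first three levels. It reuses blocks from earlier in this guide; the one assumption is that leaving credential off [providers.anthropic] triggers the convention-based lookup of env::ANTHROPIC_API_KEY described above:
# 1. Target-level: this target carries its own credential and ignores the provider's
[targets.openai-gpt4o]
model      = "gpt-4o"
credential = "env::MANAGED_OPENAI_KEY"

# 2. Provider-level: targets without an explicit credential inherit this one
[providers.openai]
base_url   = "https://api.openai.com/v1"
credential = "env::OPENAI_API_KEY"
models     = ["gpt-4o", "gpt-4o-mini"]

# 3. Convention-based: no credential declared, so the gateway falls back to
#    env::ANTHROPIC_API_KEY, derived from the provider name
[providers.anthropic]
base_url = "https://api.anthropic.com/v1"
models   = ["claude-sonnet-4-6"]

[targets.anthropic-sonnet]
model = "claude-sonnet-4-6"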

Next steps

Weighted Routing

Distribute traffic across multiple targets.

Fallback

Automatic failover between providers.

Configuration

Full TOML reference for all gateway settings.