# to11

## Docs

- [Gateway Development](https://to11.ai/docs/contributing/gateway-development.md): How to run and develop the Rust gateway locally.
- [Functions](https://to11.ai/docs/gateway/concepts/functions.md): What a function is in the to11 gateway, how it decouples caller intent from model selection, and the two configuration patterns.
- [Models](https://to11.ai/docs/gateway/concepts/models.md): Per-model configuration overrides in the to11 gateway, including timeout tuning for reasoning models.
- [Providers](https://to11.ai/docs/gateway/concepts/providers.md): What a provider is in the to11 gateway, how credential resolution works, and how providers relate to the routing layers.
- [Routes](https://to11.ai/docs/gateway/concepts/routes.md): What a route is in the to11 gateway, how strategies control target selection, and how routes fit into the routing hierarchy.
- [Targets](https://to11.ai/docs/gateway/concepts/targets.md): What a target is in the to11 gateway, how it pairs a model with gateway-owned credentials, and how credential resolution works.
- [AI Gateway Overview](https://to11.ai/docs/gateway/overview.md): Architecture and capabilities of the to11 AI Gateway — a Rust-based LLM reverse proxy with three-layer routing, inline security, and GenAI telemetry.
- [Gateway Quickstart](https://to11.ai/docs/gateway/quickstart.md): Build and run the to11 gateway standalone in under 5 minutes.
- [API Reference](https://to11.ai/docs/gateway/reference/api.md): Gateway endpoints, headers, request/response formats, and error codes.
- [Configuration Reference](https://to11.ai/docs/gateway/reference/configuration.md): Complete TOML configuration reference for the to11 AI Gateway — every section, field, type, and default.
- [Providers](https://to11.ai/docs/gateway/reference/providers.md): Supported LLM providers, model routing, and format translation.
- [Streaming](https://to11.ai/docs/gateway/reference/streaming.md): How the gateway handles SSE streaming — fast-path passthrough vs normalised-path translation.
- [Experiment Routing](https://to11.ai/docs/gateway/routing/experiment.md): A/B test different models and inference parameters for the same task using weighted experiment variants.
- [Fallback Routing](https://to11.ai/docs/gateway/routing/fallback.md): Configure automatic failover so the gateway tries the next provider when the primary one fails.
- [Function Routing](https://to11.ai/docs/gateway/routing/functions.md): Define named task aliases so your application sends function names instead of model names, letting you swap models without changing code.
- [Routing Overview](https://to11.ai/docs/gateway/routing/overview.md): How the gateway resolves requests through three routing layers: passthrough, managed routing, and function routing.
- [Passthrough Routing](https://to11.ai/docs/gateway/routing/passthrough.md): How the gateway acts as a transparent proxy in L1 passthrough mode, forwarding the caller's API key directly to upstream providers.
- [Simple Routing](https://to11.ai/docs/gateway/routing/simple.md): Set up gateway-owned credentials so callers can use specific models without providing their own API keys.
- [Weighted Routing](https://to11.ai/docs/gateway/routing/weighted.md): Split LLM traffic across multiple providers or API keys using weighted random selection.
- [Security Pipeline](https://to11.ai/docs/gateway/security/overview.md): Inline security pipeline for input and output guardrails.
- [Content Capture](https://to11.ai/docs/gateway/telemetry/content-capture.md): How to enable and configure prompt and completion recording in GenAI telemetry spans.
- [Context Propagation](https://to11.ai/docs/gateway/telemetry/context-propagation.md): How to attach session metadata, tool execution, retrieval, and agent context to your LLM requests.
- [Direct Ingestion](https://to11.ai/docs/gateway/telemetry/direct-ingestion.md): How to send custom OpenTelemetry spans directly to the to11 collector from any OTel SDK.
- [Distributed Tracing](https://to11.ai/docs/gateway/telemetry/distributed-tracing.md): How to group LLM calls and agent-to-agent communication under a single trace.
- [Metrics](https://to11.ai/docs/gateway/telemetry/metrics.md): Histogram metrics, counters, metric dimensions, and ClickHouse query examples.
- [Telemetry Overview](https://to11.ai/docs/gateway/telemetry/overview.md): Dual-pipeline architecture, OTel GenAI semantic conventions, and the ten supported operation names.
- [Span Attributes](https://to11.ai/docs/gateway/telemetry/span-attributes.md): Complete reference for every OpenTelemetry span attribute emitted by the gateway.
- [What is to11?](https://to11.ai/docs/get-started/index.md): Learn what to11 is, who it's for, and how it fits into your AI stack.
- [Quickstart](https://to11.ai/docs/get-started/quickstart.md): Get the to11 stack running in 5 minutes with Docker Compose.
- [Anthropic SDK](https://to11.ai/docs/guides/anthropic-sdk.md): How to use the Anthropic Python and TypeScript SDKs with to11.
- [OpenAI SDK](https://to11.ai/docs/guides/openai-sdk.md): How to use the OpenAI Python and Node.js SDKs with to11.
- [Vercel AI SDK](https://to11.ai/docs/guides/vercel-ai-sdk.md): How to use the Vercel AI SDK with to11.
- [Self-Hosted Observability](https://to11.ai/docs/self-hosted/observability.md): Use Grafana, Tempo, Loki, and ClickHouse to explore traces, logs, and metrics from your local to11 deployment.

## OpenAPI Specs

- [openapi](https://to11.ai/docs/api-reference/openapi.json)