Documentation Index
Fetch the complete documentation index at: https://to11.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Ship Secure AI. One Platform, Every LLM Call.
to11 is an open-source AI engineering platform that replaces 6-8 fragmented tools — tracing, evals, gateway, guardrails, alerting, prompt management — with one integrated product on a shared data plane.
Platform Quickstart
Get the full stack running in 5 minutes with Docker Compose.
Gateway Quickstart
Build and run the Rust gateway standalone — no Docker required.
Telemetry
OpenTelemetry GenAI semantic conventions from gateway to ClickHouse.
SDK Guides
Point your existing OpenAI, Anthropic, or Vercel AI SDK at to11.
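Pointing an existing SDK at to11 generally means overriding the base URL it sends requests to. A minimal sketch using plain `fetch` with an OpenAI-style chat completions request; the gateway address, port, route, and `TO11_API_KEY` variable are assumptions for illustration, not documented values:

```typescript
// Build an OpenAI-format chat completion request aimed at a to11 gateway.
// The base URL is a placeholder; substitute your actual gateway address.
type Message = { role: "system" | "user" | "assistant"; content: string };

function buildChatRequest(baseUrl: string, model: string, messages: Message[]) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/v1/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Header name and env var are assumed for this sketch.
        Authorization: `Bearer ${process.env.TO11_API_KEY ?? "sk-placeholder"}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage:
//   const { url, init } = buildChatRequest("http://localhost:8080", "gpt-4o-mini",
//     [{ role: "user", content: "hello" }]);
//   const res = await fetch(url, init);
```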
How It Works
Core Services
| Service | Language | Role |
|---|---|---|
| Gateway | Rust (Axum, Tokio, Hyper) | LLM reverse proxy — tracing, sync guardrails (regex PII, blocklist), routing. <1ms overhead. |
| OTel Collector | Go (custom build) | OTLP ingestion with OIDC auth; exports traces and metrics to ClickHouse. |
| API | TypeScript, Effect-TS | REST API + API key management + OIDC discovery/JWKS endpoints. |
| Web | Next.js 16, TanStack Query, Auth.js | Dashboard UI, analytics. |
Why to11?
Sub-ms overhead
The Rust gateway adds less than 1ms to your LLM calls. Zero-copy SSE passthrough on the fast path.
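On the fast path, streamed responses are forwarded as server-sent events without re-serialization. On the client side, consuming such a stream amounts to splitting on lines and reading `data:` payloads; a minimal parser sketch (illustrative only, not tied to to11's internals):

```typescript
// Extract data payloads from a chunk of SSE text.
// OpenAI-style streams terminate with a literal "[DONE]" sentinel.
function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim())
    .filter((payload) => payload !== "[DONE]");
}
```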
Any SDK, any provider
Send requests in OpenAI, Anthropic, or xAI format. to11 routes to any upstream provider and responds in the caller’s native format.
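Accepting one provider's format and routing to another requires translating request shapes in flight. A simplified sketch of that kind of translation, mapping an Anthropic Messages-style body to an OpenAI chat body; this is an illustration of the idea, not to11's actual implementation:

```typescript
// Anthropic puts the system prompt in a top-level `system` field;
// OpenAI carries it as the first message with role "system".
type AnthropicBody = {
  model: string;
  max_tokens: number;
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
};

type OpenAIBody = {
  model: string;
  max_tokens: number;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
};

function anthropicToOpenAI(body: AnthropicBody): OpenAIBody {
  const messages: OpenAIBody["messages"] = [];
  if (body.system) messages.push({ role: "system", content: body.system });
  messages.push(...body.messages);
  return { model: body.model, max_tokens: body.max_tokens, messages };
}
```

The reverse mapping (splitting the leading system message back out) lets the gateway answer in the caller's native format regardless of the upstream provider.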
Built-in security
PII detection and blocklist filtering run inline — before your request ever leaves the gateway. ML-based prompt injection detection is coming soon.
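A synchronous guardrail of this kind is just a scan over the prompt before it is forwarded. A minimal sketch with simplified example patterns; these regexes and the blocklist are placeholders, not to11's actual rule set:

```typescript
// Inline guardrail sketch: regex PII scan plus a literal blocklist,
// evaluated before the request leaves the process.
const PII_PATTERNS: { name: string; re: RegExp }[] = [
  { name: "email", re: /[\w.+-]+@[\w-]+\.[\w.]+/ },
  { name: "ssn", re: /\b\d{3}-\d{2}-\d{4}\b/ },
];

function scanPrompt(text: string, blocklist: string[]): string[] {
  const hits: string[] = [];
  for (const { name, re } of PII_PATTERNS) {
    if (re.test(text)) hits.push(`pii:${name}`);
  }
  for (const term of blocklist) {
    if (text.toLowerCase().includes(term.toLowerCase())) hits.push(`blocked:${term}`);
  }
  return hits; // empty array means the request may proceed
}
```

Running the scan inline on the request path is what keeps flagged content from ever reaching an upstream provider.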
Open Source
to11 is licensed under Apache 2.0. Contributions are welcome.
GitHub
Star the repo and follow development.
Contributing
Set up your dev environment and contribute to the gateway.