# Quickstart

This guide gets the full to11 stack running locally (gateway, OTel Collector, ClickHouse, and observability) in under 5 minutes.

## Prerequisites
- Docker and Docker Compose v2+
- An API key for at least one LLM provider (OpenAI or Anthropic)
## 1. Clone the repository
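A minimal clone, assuming the repository is hosted at `github.com/to11/to11` (substitute the actual repository URL):

```shell
# Clone the to11 repository (URL is an assumption; use the real one)
git clone https://github.com/to11/to11.git
cd to11
```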
## 2. Set your API keys
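Provider credentials are typically read from a `.env` file next to the compose file. The variable names below are assumptions; check the repository for an `.env.example` with the exact names:

```shell
# .env — provider credentials picked up by docker compose
OPENAI_API_KEY=sk-...        # only needed for OpenAI models
ANTHROPIC_API_KEY=sk-ant-... # only needed for Anthropic models
```

Only one key is required; the gateway proxies whichever providers you configure.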
## 3. Start the stack
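With Docker Compose v2, bringing everything up is one command from the repository root:

```shell
docker compose up -d
```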
By default, Compose pulls the latest images. To pin a specific build:
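A sketch of pinning, assuming the compose file parameterizes the image tag through an environment variable; the name `TO11_VERSION` is hypothetical, so check `docker-compose.yml` for the variable it actually interpolates:

```shell
# Hypothetical tag variable; consult the compose file for the real name
TO11_VERSION=<tag> docker compose up -d
```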
| Service | Port | Description |
|---|---|---|
| Gateway | 4000 | LLM proxy endpoint |
| ClickHouse | 8123 | Telemetry storage |
| OTel Collector | 4317 / 4318 | GenAI OTLP (gRPC / HTTP) |
| Grafana | 3001 | Dashboards |
| Tempo | 3200 | Distributed traces |
| Valkey | 6379 | Auth cache |
| Token Issuer | 4400 | API key bootstrap |
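Once the containers are running, you can confirm that each service in the table above is up and has published its port:

```shell
# List service state and port mappings for the stack
docker compose ps
```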
## 4. Send a test request
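A sketch of a test call, assuming the gateway exposes an OpenAI-compatible chat completions route on port 4000; the route, auth scheme, and `$TO11_API_KEY` variable are assumptions (the Token Issuer on port 4400 is where API keys are bootstrapped):

```shell
# Route and auth header are assumptions based on an OpenAI-compatible proxy
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TO11_API_KEY" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello from to11!"}]
      }'
```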
## 5. Verify it worked
Check that the request was proxied successfully: you should receive a normal completion response, and the call should show up in the observability pipeline (for example, in the Grafana dashboards at http://localhost:3001).

## What's Next?
- **Gateway Overview**: Understand the gateway architecture and capabilities.
- **Use the OpenAI SDK**: Point your existing OpenAI SDK at to11 with a one-line change.
- **Telemetry**: See how your LLM calls flow through the observability pipeline.
- **Configuration**: Full TOML configuration reference for providers, security, and telemetry.