Documentation Index

Fetch the complete documentation index at: https://to11.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

Quickstart

This guide gets the full to11 stack (gateway, OTel Collector, ClickHouse, and the Grafana/Tempo observability UI) running locally in under 5 minutes.

Prerequisites

  • Docker and Docker Compose v2+
  • An API key for at least one LLM provider (OpenAI or Anthropic)

1. Clone the repository

git clone https://github.com/soerenmartius/llm-development-platform.git
cd llm-development-platform

2. Set your API keys

export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...

3. Start the stack

docker compose -f docker-compose.production.yml up -d

By default this pulls the latest images. To pin a specific build:

IMAGE_TAG=sha-abc1234 docker compose -f docker-compose.production.yml up -d

This starts all services:
| Service        | Port        | Description              |
| -------------- | ----------- | ------------------------ |
| Gateway        | 4000        | LLM proxy endpoint       |
| ClickHouse     | 8123        | Telemetry storage        |
| OTel Collector | 4317 / 4318 | GenAI OTLP (gRPC / HTTP) |
| Grafana        | 3001        | Dashboards               |
| Tempo          | 3200        | Distributed traces       |
| Valkey         | 6379        | Auth cache               |
| Token Issuer   | 4400        | API key bootstrap        |
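Before sending traffic, you can confirm that the ports above are actually listening. A minimal Python sketch (not part of the stack; it assumes the default host ports from the table and that the services run on `localhost`):

```python
import socket

# Default host ports from the table above (assumed unchanged in the compose file)
SERVICES = {
    "Gateway": 4000,
    "ClickHouse": 8123,
    "OTel Collector (gRPC)": 4317,
    "OTel Collector (HTTP)": 4318,
    "Grafana": 3001,
    "Tempo": 3200,
    "Valkey": 6379,
    "Token Issuer": 4400,
}

def port_open(port, host="localhost", timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Uncomment with the stack running:
# for name, port in SERVICES.items():
#     print(f"{name:25} {'up' if port_open(port) else 'DOWN'}")
```

A TCP connect only shows the port is bound; the health checks in step 5 confirm the services actually respond.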

4. Send a test request

curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from to11!"}],
    "max_tokens": 128
  }'
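The same request can be sent from Python using only the standard library. A minimal sketch that mirrors the curl call above (gateway URL, headers, and payload fields are taken from it; everything else is illustrative):

```python
import json
import urllib.request

GATEWAY_URL = "http://localhost:4000/v1/chat/completions"  # default gateway port

def build_chat_request(api_key, model="gpt-4o", content="Hello from to11!"):
    """Build the same POST request the curl example sends."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (requires the stack to be running):
# import os
# with urllib.request.urlopen(build_chat_request(os.environ["OPENAI_API_KEY"])) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```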

5. Verify it worked

Check that the request was proxied successfully:
# Gateway health
curl http://localhost:4000/health

# ClickHouse has data (once telemetry is enabled)
curl "http://localhost:8123/?query=SELECT+count()+FROM+otel_traces"

What’s Next?

Gateway Overview

Understand the gateway architecture and capabilities.

Use the OpenAI SDK

Point your existing OpenAI SDK at to11 with a one-line change.

Telemetry

See how your LLM calls flow through the observability pipeline.

Configuration

Full TOML configuration reference for providers, security, and telemetry.