<!-- source: https://modelux.ai/docs/ -->

# Modelux Docs

Welcome. Modelux is the **control plane for your LLM stack**. You point
your OpenAI SDK at Modelux and get policy-driven routing across every
provider, finance-grade budgets, full decision traces, and a replay
simulator — without changing your application code.
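The "no application code changes" claim boils down to repointing your SDK's base URL. A minimal sketch, assuming a proxy endpoint of `https://api.modelux.ai/v1` and an `mlx-` key prefix (both illustrative, not documented values — check your Modelux dashboard for the real ones):

```python
import os

# The official OpenAI Python SDK (v1+) reads these environment variables,
# so pointing it at Modelux requires no code changes at all.
# Both values below are assumptions for illustration.
os.environ["OPENAI_BASE_URL"] = "https://api.modelux.ai/v1"  # hypothetical proxy endpoint
os.environ["OPENAI_API_KEY"] = "mlx-your-key"                # Modelux key, not a provider key

# Equivalent explicit configuration, if you prefer it in code:
#   from openai import OpenAI
#   client = OpenAI(
#       base_url=os.environ["OPENAI_BASE_URL"],
#       api_key=os.environ["OPENAI_API_KEY"],
#   )
```

From there, every `chat.completions.create(...)` call flows through Modelux's routing layer instead of going straight to the provider.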

## Start here

- **[Quickstart](/docs/quickstart)** — Send your first request in under 2 minutes.
- **[Concepts / Routing](/docs/concepts/routing)** — How routing configs work.
- **[API Reference](/docs/api/overview)** — The proxy and management APIs.

## What Modelux does

- **Policy-driven routing.** Fallback chains, cost- and latency-optimized routing, ensembles, A/B tests, cascades, and a custom rule DSL, across OpenAI, Anthropic, Google, Azure, Bedrock, Groq, and Fireworks.
- **Finance-grade budgets.** Scoped spend caps with auto-downgrade, alerts, and tag-based attribution.
- **Decision-level observability.** Every request records the full routing decision: each attempt, the reason it was made, and per-attempt timings and costs.
- **Replay & versioning.** Configs are versioned with one-click rollback. Replay historical traffic against candidate configs before you ship them.
- **Audit & governance.** Audit log, role-based access, SSO/SAML, IP allowlists.
- **AI-native management.** REST API + MCP server — manage everything from your AI agent.
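To make the routing bullet concrete, here is a hypothetical sketch of what a versioned fallback-chain config with a budget cap might look like. Every field name below is illustrative, not Modelux's documented schema:

```python
import json

# Hypothetical routing config: try providers in order, falling through
# on error or timeout, with an auto-downgrade when the budget is hit.
# Field names and model identifiers are assumptions for illustration.
routing_config = {
    "strategy": "fallback",
    "chain": [
        {"provider": "openai", "model": "gpt-4o"},       # primary
        {"provider": "anthropic", "model": "claude-3-5-sonnet"},  # tried if the primary fails
        {"provider": "groq", "model": "llama-3.1-70b"},  # last resort
    ],
    "budget": {
        "monthly_usd": 500,
        "on_exceed": "downgrade",  # switch to the cheapest chain entry
    },
}

print(json.dumps(routing_config, indent=2))
```

In this sketch, shipping a new version of the config (rather than mutating it in place) is what enables the one-click rollback and replay-before-deploy workflow described above.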

## What Modelux doesn't do

- Prompt management / versioning (use a dedicated tool)
- Model fine-tuning or hosting (we route to providers)
- Prompt evaluation (planned, not shipped)
