<!-- source: https://modelux.ai/docs/quickstart -->

> Send your first request through Modelux in under 2 minutes.

# Quickstart

Two minutes from zero to routing. This guide walks you through creating an
account, adding a provider, creating a project and API key, and sending your
first request.

## 1. Create an account

Go to [app.modelux.ai](https://app.modelux.ai/login) and sign in with
Google or a passwordless email link. When you log in for the first time,
Modelux creates a personal organization for you.

## 2. Add a provider

Modelux is BYO-keys — we proxy requests using your own provider credentials.

1. Open **Providers** in the sidebar.
2. Click **Add provider**.
3. Pick a provider (OpenAI, Anthropic, Google, Azure, Bedrock, etc.).
4. Paste your API key. Modelux stores it encrypted and verifies it with a
   test call.

## 3. Create a project

Projects group routing configs, API keys, and usage analytics.

1. Open **Projects** in the sidebar.
2. Click **Create project**. Give it a name like `my-app`.
3. Create an API key scoped to the project — it'll be shown once, prefixed
   with `mlx_sk_`.
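
Since the key is shown only once, store it somewhere your app can read it without committing it to source control. A minimal sketch using an environment variable (the truncated `mlx_sk_...` is a placeholder for your real key):

```shell
# Keep the key out of your codebase; load it from the environment instead.
export MODELUX_API_KEY="mlx_sk_..."

# Print only the prefix when debugging, never the full key.
echo "Key loaded: ${MODELUX_API_KEY:0:7}..."
```

In production, a secrets manager or your platform's environment configuration is a better home for the key than a shell profile.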

## 4. Configure routing (optional)

By default, you can call any model directly by name (`gpt-4o`, `claude-sonnet-4-5`,
etc.) and Modelux will route it to the matching provider.

For more advanced routing — fallbacks, ensembles, cost optimization — create
a **routing config** under **Routing** in the sidebar. Each config gets a
stable slug like `@production` that your app calls instead of a raw model
name.
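
Switching an app from a direct model name to a routing config is therefore a one-field change in the request body. A minimal sketch (`@production` is the example slug from above; yours will match whatever you named the config):

```python
# The two request bodies differ only in the "model" field: a raw model name
# routes directly to the matching provider, while an "@"-prefixed slug
# applies the rules defined in your routing config.
direct = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}
routed = {**direct, "model": "@production"}

print(direct["model"], "->", routed["model"])
```

Because the slug is stable, you can change fallbacks or model choices in the dashboard without redeploying your app.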

## 5. Send your first request

The OpenAI SDK works unchanged. Just swap the `base_url` and API key:

<CodeTabs labels={["Python", "Node", "curl"]}>

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelux.ai/v1",
    api_key="mlx_sk_...",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # or "@production" for a routing config
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.modelux.ai/v1",
  apiKey: process.env.MODELUX_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```

```bash
curl https://api.modelux.ai/v1/chat/completions \
  -H "Authorization: Bearer $MODELUX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

</CodeTabs>
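
Responses come back in the standard OpenAI chat-completion shape, which is why `response.choices[0].message.content` works in the snippets above. A sketch of that shape using only the standard library (field values here are illustrative, not real output):

```python
import json

# An abridged OpenAI-compatible chat completion response.
raw = """{
  "id": "chatcmpl-123",
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16}
}"""

data = json.loads(raw)

# Same access path as response.choices[0].message.content in the SDK examples.
print(data["choices"][0]["message"]["content"])
```

The `usage` block is handy for cost tracking: it reports prompt, completion, and total token counts for the request.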

## What to do next

- **[Routing concepts](/docs/concepts/routing)** — Understand how routing configs work.
- **[Set up a fallback chain](/docs/guides/fallback-chain)** — Reliability in 5 minutes.
- **[Cost optimization](/docs/guides/cost-optimization)** — Cut your bill with smart routing.
- **[MCP setup](/docs/guides/mcp-setup)** — Manage Modelux from Claude Code.
