# Migrating from OpenAI
If your app already uses the OpenAI SDK, you can point it at Modelux in three quick steps; the only code change is a two-line client update.
## 1. Add OpenAI as a provider in Modelux
In the dashboard, add your existing OpenAI API key as a provider. Modelux will use that key to proxy requests — you keep your existing OpenAI account, billing, and rate limits.
## 2. Create a Modelux API key
Create a project, then generate an API key scoped to it. Copy the `mlx_sk_...` value.
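If you load the key from the environment (as the config in step 3 does), a fail-fast check at startup catches a missing or mistyped key before the first request. This is a small sketch, not part of the Modelux SDK; the variable name `MODELUX_API_KEY` is the one assumed throughout this guide:

```python
import os


def require_modelux_key() -> str:
    """Fail fast if the Modelux key is absent or has the wrong prefix."""
    key = os.environ.get("MODELUX_API_KEY", "")
    if not key.startswith("mlx_sk_"):
        raise RuntimeError(
            "MODELUX_API_KEY is missing or malformed; expected an mlx_sk_... key"
        )
    return key
```

Call it once at startup, before constructing the client, so a misconfigured deployment fails immediately instead of on the first API call.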
## 3. Update your client config
Change two lines in your app:
```diff
  import os

  from openai import OpenAI

  client = OpenAI(
-     api_key=os.environ["OPENAI_API_KEY"],
+     base_url="https://api.modelux.ai/v1",
+     api_key=os.environ["MODELUX_API_KEY"],
  )
```
That’s it. Your existing `client.chat.completions.create(...)` calls work unchanged, and model names like `gpt-4o-mini` are routed to OpenAI through your own credentials.
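Under the hood, the SDK simply sends OpenAI-style HTTP requests to the new base URL. A standard-library sketch of the request the migrated client produces (the endpoint path and headers follow the OpenAI-compatible wire format; the `build_chat_request` helper is invented for illustration and only builds the request, it never sends it):

```python
import json
import os
import urllib.request

BASE_URL = "https://api.modelux.ai/v1"


def build_chat_request(model: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completions request."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            # Same bearer-token scheme as the OpenAI API, with the Modelux key.
            "Authorization": f"Bearer {os.environ.get('MODELUX_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("gpt-4o-mini", [{"role": "user", "content": "Hello"}])
print(req.full_url)  # https://api.modelux.ai/v1/chat/completions
```

Only the host changes; the path, method, body shape, and auth header are exactly what the OpenAI SDK was already sending.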
## What you get for free
Just by routing through Modelux, with zero other code changes:
- Full request logs with searchable traces
- Per-request cost tracking
- Latency percentiles by model
- Team-level analytics if multiple apps share one org
## Next steps
Once traffic is flowing, you can layer on more features with minimal further changes:
- Add a fallback chain to improve reliability: create a routing config `@production` that falls back from `gpt-4o-mini` to `claude-haiku-4-5`, then update your app to call `model="@production"` instead of `model="gpt-4o-mini"`.
- Set a monthly budget with auto-downgrade to cap your spend.
- Enable the replay simulator to test changes against historical traffic.
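The routing config itself is created in the Modelux dashboard; its exact schema is defined there, so the fragment below is purely a hypothetical sketch of what a fallback chain named `@production` might look like:

```json
{
  "name": "@production",
  "strategy": "fallback",
  "targets": [
    { "model": "gpt-4o-mini" },
    { "model": "claude-haiku-4-5" }
  ]
}
```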
## Streaming still works
Streaming responses pass through unchanged:
```python
stream = client.chat.completions.create(
    model="@production",
    messages=[...],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```