# Providers
A provider is an upstream LLM vendor — OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Groq, Fireworks. Modelux proxies your requests using provider credentials you supply (BYO keys). We don’t mark up per-token costs.
## Supported providers
| Provider | Status |
|---|---|
| OpenAI | Shipped |
| Anthropic | Shipped |
| Google (Gemini) | Shipped |
| Azure OpenAI | Shipped |
| AWS Bedrock | Shipped |
| Groq | In progress |
| Fireworks | In progress |
## Adding a provider
- Open Providers in the dashboard.
- Click Add provider.
- Select the vendor, paste your API key, and optionally set a base URL for self-hosted or regional endpoints.
- Modelux stores the credential encrypted and runs a verification call before marking it active.
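The verification step itself is internal to Modelux, but the idea is simple: issue a cheap probe request with the supplied key and only mark the credential active if it succeeds. A minimal sketch, where `fake_probe` stands in for a real probe call (e.g. a minimal request against the vendor's API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProviderCredential:
    vendor: str
    api_key: str
    active: bool = False  # only flipped on after verification succeeds

def verify_and_activate(cred: ProviderCredential,
                        probe: Callable[[str], bool]) -> ProviderCredential:
    """Run a cheap probe request with the key; activate only on success."""
    if probe(cred.api_key):
        cred.active = True
    return cred

# Hypothetical probe: a real one would make a minimal authenticated
# request (for example, listing models) and report whether it succeeded.
def fake_probe(key: str) -> bool:
    return key.startswith("sk-")

cred = verify_and_activate(ProviderCredential("openai", "sk-test"), fake_probe)
```

Keeping `active` false until the probe passes means a mistyped key never enters the routing pool.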
## Health monitoring
Modelux tracks provider health continuously:
- Success rate — rolling-window ratio of 2xx responses to 4xx/5xx errors
- p50 latency — per-model, per-region where applicable
- Last check timestamp — indicates how fresh the health signal is
When a provider is marked unhealthy, health-aware routing strategies automatically prefer other providers until it recovers.
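Modelux's routing internals aren't shown here, but the signals above compose naturally. A minimal sketch of health-aware selection, assuming a fixed-size rolling window and an illustrative 90% success-rate threshold:

```python
import statistics
from collections import deque

class ProviderHealth:
    """Rolling window of request outcomes and latencies for one provider."""
    def __init__(self, window: int = 100, min_success_rate: float = 0.9):
        self.outcomes = deque(maxlen=window)   # True = 2xx, False = 4xx/5xx
        self.latencies = deque(maxlen=window)  # seconds
        self.min_success_rate = min_success_rate

    def record(self, ok: bool, latency_s: float) -> None:
        self.outcomes.append(ok)
        self.latencies.append(latency_s)

    @property
    def success_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    @property
    def p50_latency(self) -> float:
        return statistics.median(self.latencies) if self.latencies else 0.0

    @property
    def healthy(self) -> bool:
        return self.success_rate >= self.min_success_rate

def pick_provider(health: dict[str, ProviderHealth]) -> str:
    """Prefer healthy providers; among those, lowest p50 latency wins."""
    healthy = [name for name, h in health.items() if h.healthy]
    pool = healthy or list(health)  # fall back if everything is unhealthy
    return min(pool, key=lambda name: health[name].p50_latency)
```

With this shape, an unhealthy provider is skipped even if it is the fastest, which matches the "prefer other providers until it recovers" behavior described above.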
## Credential rotation
Rotate a provider’s API key without downtime:
- Edit the provider in the dashboard
- Paste the new key and save
- Modelux verifies the new key, then atomically swaps it
In-flight requests started before the swap finish with the old key; new requests pick up the new key immediately.
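The verify-then-swap sequence can be sketched in a few lines. This is not Modelux's implementation, just the pattern: each request snapshots the key once at start, so a rotation mid-flight never changes the key a running request already holds.

```python
import threading
from typing import Callable

class RotatingCredential:
    """Holds the current key; swaps are atomic, in-flight readers keep theirs."""
    def __init__(self, key: str):
        self._key = key
        self._lock = threading.Lock()

    def current(self) -> str:
        # A request captures the key once, at start, and keeps using that
        # snapshot even if a rotation happens while it is still running.
        with self._lock:
            return self._key

    def rotate(self, new_key: str, verify: Callable[[str], bool]) -> bool:
        # Verify the new key first; swap only on success, so a bad key
        # never replaces a working one.
        if not verify(new_key):
            return False
        with self._lock:
            self._key = new_key
        return True
```

Because verification happens before the swap, a failed rotation leaves the old key in place and traffic is never interrupted.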
## Custom base URLs
For Azure OpenAI deployments, self-hosted vLLM endpoints, or regional Bedrock routes, set a custom base URL when creating the provider. Modelux will use that URL for all requests routed to this provider.