MODRO
Model Router

Docs

Clear enough for first use, short enough to keep open beside the terminal.

These docs are intentionally lightweight. They explain the product contract users actually need: your own OpenAI key, our hosted /v1, private signed dashboard access, model presets, and tier roles.

How it works

Hosted endpoint

Your app sends requests to our hosted /v1/chat/completions endpoint using your OpenAI API key as the bearer token.
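A minimal sketch of that request in Python, using only the standard library. The base URL and the model field value are placeholders, not the product's real values; your own OpenAI key goes in the bearer token.

```python
import json
import urllib.request

# Placeholder base URL -- substitute the hosted base URL you were given.
BASE_URL = "https://router.example.com"

def build_chat_request(api_key: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completions request for the hosted /v1 endpoint."""
    body = json.dumps({
        # Placeholder: the model field is resolved by your tenant policy,
        # so the exact value here depends on your configuration.
        "model": "fast",
        "messages": messages,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={
            # Your own OpenAI API key as the bearer token.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen` (or any HTTP client) is the only remaining step; the shape above is standard OpenAI-compatible traffic.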

Tenant-scoped policy

Each API key maps to a tenant-scoped model policy. Users can change their own utility, fast, and deep tiers from the signed settings page.

Signed dashboard flow

Every tenant gets a signed private dashboard URL in the X-Client-Dashboard-Url response header. No separate end-user auth flow is required for the client experience.
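Reading that header can be sketched like this; the helper name is illustrative, and the lookup is case-insensitive because HTTP header names are.

```python
def dashboard_url(headers: dict) -> str:
    """Return the signed dashboard link from a response's headers, if present."""
    # HTTP header names are case-insensitive; normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-client-dashboard-url")
```

Call it once on your first successful response and store the signed link for the tenant.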

Presets

Cheapest

Pushes the most traffic into the lowest-cost path. Best when speed and spend matter more than richer answers.

Balanced

The default preset. Good for most teams that want reliable output quality and obvious savings without extra tuning.

Highest quality

Uses stronger fast and deep tiers for teams prioritizing answer quality over token savings.
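The three presets above amount to different tier-to-model mappings. The sketch below is illustrative only: the actual model IDs behind each preset come from your tenant policy, and these names are placeholders.

```python
# Placeholder model IDs -- not the product's real mapping.
PRESETS = {
    "cheapest": {"utility": "small-model", "fast": "small-model", "deep": "mid-model"},
    "balanced": {"utility": "small-model", "fast": "mid-model", "deep": "large-model"},
    "highest_quality": {"utility": "mid-model", "fast": "large-model", "deep": "large-model"},
}
```

Choosing a preset on the settings page is equivalent to picking one of these rows; setting tiers directly overrides individual entries.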

Tier roles

Utility

Lowest-cost work. Good for small transforms, boilerplate, and routine low-stakes traffic.

Fast

The main production tier. It should handle most traffic when the prompt does not need extra depth.

Deep

Highest-cost tier. Reserve it for harder prompts, richer explanation, or cases where completeness matters.
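The tier roles above can be summarized as a decision rule. This is a hypothetical heuristic for reasoning about where traffic should land, not the router's actual policy.

```python
def pick_tier(routine: bool, needs_depth: bool) -> str:
    """Hypothetical tier choice: utility for routine low-stakes work,
    deep only when completeness matters, fast for everything else."""
    if routine:
        return "utility"
    if needs_depth:
        return "deep"
    return "fast"
```

In practice the fast tier should absorb most traffic, with utility and deep reserved for the two extremes.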

FAQ

Do I use your API key or mine?

You use your own OpenAI API key. This product is a routing layer and hosted endpoint, not a separate billing identity.

What URL do I point my app to?

Point your client to our hosted base URL and send OpenAI-compatible traffic to /v1/chat/completions.

How do I reach the private dashboard?

Read the X-Client-Dashboard-Url response header from your first successful request and open that signed link.

Can I choose specific models?

Yes. Open /client/settings from your signed dashboard flow and choose a preset or set utility, fast, and deep models directly.

Will users see admin analytics?

No. The client pages are intentionally separate from admin. End users only get their own private dashboard and settings flow.