Why Manifest
- **Smart routing**: Scores each query and routes it to the cheapest model that can handle it.
- **Automatic fallbacks**: If a model fails, Manifest retries with a backup. No downtime, no manual switching.
- **Spending limits**: Get email alerts or block requests when spending crosses a threshold.
- **500+ models**: Access 500+ models across every major provider through one platform, including models from paid subscriptions.
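The fallback behavior can be sketched as a retry loop that walks an ordered list of models. This is a minimal illustration, not Manifest's actual implementation; the model names and the `call_model` hook are hypothetical stand-ins:

```python
class ModelError(Exception):
    """Raised when a provider call fails (rate limit, outage, etc.)."""

def complete_with_fallback(prompt, models, call_model):
    """Try each model in order; return the first successful response."""
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except ModelError as err:
            last_error = err  # this model failed; fall through to the backup
    raise RuntimeError(f"all models failed: {last_error}")

# Stubbed provider call for demonstration: the primary model always fails.
def fake_call(model, prompt):
    if model == "primary-model":
        raise ModelError("rate limited")
    return f"{model}: ok"

print(complete_with_fallback("hi", ["primary-model", "backup-model"], fake_call))
# → backup-model: ok
```

Because the retry happens inside the router, the agent never sees the failure; it only receives the backup model's response.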
How it works
Manifest intercepts each agent request, scores the query in under 2 ms, assigns it a tier (simple / standard / complex / reasoning), and forwards it to the matching model. Token counts, latency, and cost data are captured as requests flow through the router and appear in the dashboard.

Manifest vs OpenRouter
| | Manifest | OpenRouter |
|---|---|---|
| Open source | Yes | No |
| Self-hostable | Yes | No |
| Privacy | Metadata only (self-hosted: no middleman) | Full request proxied |
| Routing logic | Transparent, open-source scoring | Black box |
| Cost | Free | Per-token markup |
| Dashboard | Built-in | Separate |
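The score-then-route flow described above can be sketched as follows. The heuristic and the tier-to-model table here are illustrative assumptions for the sake of the example, not Manifest's actual open-source scoring logic:

```python
# Hypothetical tier-to-model mapping; a real deployment would configure its own.
TIER_MODELS = {
    "simple": "small-cheap-model",
    "standard": "mid-tier-model",
    "complex": "frontier-model",
    "reasoning": "reasoning-model",
}

def score_query(prompt: str) -> str:
    """Assign a tier with a cheap heuristic (keywords, then length)."""
    if any(k in prompt.lower() for k in ("prove", "step by step", "derive")):
        return "reasoning"
    words = len(prompt.split())
    if words > 300:
        return "complex"
    if words > 50:
        return "standard"
    return "simple"

def route(prompt: str) -> str:
    """Map a prompt to the cheapest model for its tier."""
    return TIER_MODELS[score_query(prompt)]

print(route("What is 2 + 2?"))                   # → small-cheap-model
print(route("Derive the formula step by step"))  # → reasoning-model
```

Keeping the scoring to string checks like these is what makes a sub-2 ms routing decision plausible: no model call is needed to pick the tier.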
Privacy
Manifest sits between your agent and your LLM providers. In self-hosted mode that middleman runs on your own machine; in Cloud mode it runs on ours. Either way, the actual LLM requests still go out to the provider you configured (Anthropic, OpenAI, etc.).

- Self-hosted: Requests go from your agent to your Manifest container to the LLM provider. No Manifest server ever sees them. Routing decisions, token counts, costs, and dashboard data all live in your own PostgreSQL.
- Cloud: Requests transit through app.manifest.build on their way to the LLM provider. Only the model name, token counts, and latency are stored for the dashboard. Message content is not persisted.
Next step
Get started
Set up Manifest and start routing.