Manifest is an open-source LLM router for personal agents that routes queries to the cheapest model that can handle them. It comes with a dashboard for tracking tokens, costs, and usage.

Why Manifest

Smart routing

Scores each query and routes it to the cheapest model that can handle it.

Automatic fallbacks

If a model fails, Manifest retries with a backup. No downtime, no manual switching.
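The fallback behavior can be sketched as a simple try-next loop. This is an illustrative sketch, not Manifest's actual API: `call_model`, the model names, and the failure mode are all hypothetical stand-ins.

```python
# Hedged sketch of retry-with-backup routing. call_model and the model
# names are hypothetical, not Manifest's real interface.

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real provider call; raises on failure."""
    if model == "primary-model":
        raise RuntimeError("provider outage")  # simulate a failed primary
    return f"{model}: ok"

def route_with_fallback(prompt: str, models: list[str]) -> str:
    """Try each model in order, falling back to the next on failure."""
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except Exception as err:
            last_error = err  # remember the failure, try the backup
    raise RuntimeError("all models failed") from last_error

print(route_with_fallback("hi", ["primary-model", "backup-model"]))
```

Because the primary raises, the request transparently lands on the backup with no manual switching.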

Set limits

Get email alerts or block requests when spending crosses a threshold.
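The alert-then-block behavior amounts to comparing cumulative spend against two thresholds. A minimal sketch, assuming two configured values (names `alert_threshold` and `hard_limit` are illustrative, not Manifest's configuration keys):

```python
# Hedged sketch of a spend-threshold check; not Manifest's implementation.

def check_spend(spent_usd: float, alert_threshold: float, hard_limit: float) -> str:
    """Return the action to take for the current cumulative spend."""
    if spent_usd >= hard_limit:
        return "block"   # reject further requests outright
    if spent_usd >= alert_threshold:
        return "alert"   # allow the request but send an email alert
    return "allow"

print(check_spend(4.0, alert_threshold=5.0, hard_limit=10.0))   # allow
print(check_spend(6.0, alert_threshold=5.0, hard_limit=10.0))   # alert
print(check_spend(12.0, alert_threshold=5.0, hard_limit=10.0))  # block
```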

500+ models

Access 500+ models across every major provider through one platform, including models from paid subscriptions.

How it works

Manifest intercepts each agent request, scores the query in under 2 ms, assigns a tier (simple / standard / complex / reasoning), and forwards it to the matching model. Token counts, latency, and cost data are captured as requests flow through the router and show up in the dashboard.
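The score-then-tier-then-forward pipeline can be sketched as follows. The scoring heuristic, model table, and prices here are assumptions for illustration; only the four tier names come from the description above.

```python
# Illustrative sketch of tier-based routing: score the query, map the score
# to one of the four tiers, pick the cheapest model registered for that tier.
# The heuristic and the model/price table are hypothetical, not Manifest's.

TIER_MODELS = {
    "simple":    [("cheap-small", 0.10), ("mid", 0.50)],
    "standard":  [("mid", 0.50), ("large", 3.00)],
    "complex":   [("large", 3.00)],
    "reasoning": [("reasoner", 15.00)],
}  # (model, $ per 1M tokens) -- made-up prices

def score(query: str) -> int:
    """Toy complexity score: longer or reasoning-style queries score higher."""
    s = len(query) // 100
    if "step by step" in query.lower():
        s += 3
    return s

def tier_for(s: int) -> str:
    """Map a numeric score onto the four tiers."""
    if s >= 3:
        return "reasoning"
    if s == 2:
        return "complex"
    if s == 1:
        return "standard"
    return "simple"

def route(query: str) -> str:
    """Return the cheapest model in the query's tier."""
    candidates = TIER_MODELS[tier_for(score(query))]
    return min(candidates, key=lambda m: m[1])[0]

print(route("What is 2+2?"))                          # -> cheap-small
print(route("Prove this claim step by step, please."))  # -> reasoner
```

A trivial query lands on the cheapest model; a reasoning-style prompt is escalated, which is the cost-saving idea behind tiered routing.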

Manifest vs OpenRouter

|               | Manifest                                   | OpenRouter          |
| ------------- | ------------------------------------------ | ------------------- |
| Open source   | Yes                                        | No                  |
| Self-hostable | Yes                                        | No                  |
| Privacy       | Metadata only (self-hosted: no middleman)  | Full request proxied |
| Routing logic | Transparent, open-source scoring           | Black box           |
| Cost          | Free                                       | Per-token markup    |
| Dashboard     | Built-in                                   | Separate            |

Privacy

Manifest sits between your agent and your LLM providers. In self-hosted mode that middleman runs on your own machine; in Cloud mode it runs on ours. Either way, the actual LLM requests still go out to the provider you configured (Anthropic, OpenAI, etc.).
  • Self-hosted: Requests go from your agent to your Manifest container to the LLM provider. No Manifest server ever sees them. Routing decisions, token counts, costs, and dashboard data all live in your own PostgreSQL.
  • Cloud: Requests transit through app.manifest.build on their way to the LLM provider. Only the model name, token counts, and latency are stored for the dashboard. Message content is not persisted.
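The Cloud-mode storage policy above can be sketched as a filter that keeps only dashboard metadata and never copies message content. The field names here are illustrative, not Manifest's actual schema.

```python
# Hedged sketch of metadata-only logging: keep model name, token counts,
# and latency; drop message content before anything is persisted.
# Field names are hypothetical, not Manifest's real schema.

def dashboard_record(request: dict, latency_ms: float) -> dict:
    """Build the record stored for the dashboard; message content is not copied."""
    return {
        "model": request["model"],
        "prompt_tokens": request["prompt_tokens"],
        "completion_tokens": request["completion_tokens"],
        "latency_ms": latency_ms,
        # request["messages"] is intentionally omitted
    }

rec = dashboard_record(
    {
        "model": "some-model",
        "prompt_tokens": 42,
        "completion_tokens": 7,
        "messages": [{"role": "user", "content": "private text"}],
    },
    latency_ms=120.5,
)
print("messages" in rec)  # False -- content never reaches storage
```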

Next step

Get started

Set up Manifest and start routing.